Workshop

Bayesian inference and learning with spiking neurons

  • Sophie Deneve (École Normale Supérieure, Paris, France)
G3 10 (Lecture hall)

Abstract

We show that the dynamics of spiking neurons can be interpreted as a form of Bayesian inference in time. Neurons that optimally integrate evidence about labile events in the external world exhibit properties similar to leaky integrate-and-fire neurons with spike-dependent adaptation, and respond maximally to fluctuations of their input. Spikes signal the occurrence of new information, i.e., what cannot be predicted from past activity. As a result, firing statistics are close to Poisson, even though the spikes provide a deterministic representation of probabilities.
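As an illustration of this idea, the following minimal sketch (not the speaker's exact formulation; all parameter names and values are assumptions) simulates a single neuron that tracks the log-odds L = log P(x=1 | inputs) / P(x=0 | inputs) of a hidden two-state Markov process from a Poisson input spike train, and fires only when L exceeds the running estimate G already conveyed by its past output spikes, the spike-dependent adaptation mentioned above.

import numpy as np

rng = np.random.default_rng(0)

dt, T = 1e-3, 2.0                  # time step and duration (s)
r_on, r_off = 2.0, 2.0             # switching rates of the hidden state (Hz)
q_on, q_off = 60.0, 20.0           # input Poisson rates when x=1 / x=0 (Hz)
w = np.log(q_on / q_off)           # evidence carried by one input spike
bias = q_on - q_off                # evidence carried by the absence of spikes
g0 = 1.0                           # log-odds step signalled by one output spike

x = 0                              # hidden state
L = 0.0                            # neuron's log-odds estimate
G = 0.0                            # what its past output spikes already encode
out_spikes = []

for step in range(int(T / dt)):
    # hidden state switches as a two-state Markov chain
    if x == 0 and rng.random() < r_on * dt:
        x = 1
    elif x == 1 and rng.random() < r_off * dt:
        x = 0

    s_in = rng.random() < (q_on if x else q_off) * dt   # Poisson input spike

    # prior (leak) term plus evidence term of the log-odds dynamics
    L += dt * (r_on * (1 + np.exp(-L)) - r_off * (1 + np.exp(L)) - bias) + w * s_in
    G += dt * (r_on * (1 + np.exp(-G)) - r_off * (1 + np.exp(G)))

    # spike only when the estimate outruns what was already transmitted
    if L - G > g0 / 2:
        out_spikes.append(step * dt)
        G += g0                    # spike-dependent adaptation

print(f"{len(out_spikes)} output spikes in {T:.1f} s")

Because each output spike reports only the unpredicted part of the evidence, the output spike train in this toy simulation is irregular even though the underlying quantity being encoded is deterministic.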

We proceed to develop a theory of Bayesian learning in spiking neural networks, in which neurons learn to recognize the spatial and temporal dynamics of their synaptic inputs, while successive layers of neurons learn hierarchical causal models of the sensory input. The resulting learning rules are local, spike-time dependent, and highly nonlinear. This approach provides a principled description of spiking and plasticity rules that maximize information transfer between successive layers of neurons while limiting the number of costly spikes.
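For readers unfamiliar with rules of this kind, the sketch below shows a generic stand-in (not the rule derived in the talk): a local, spike-timing-dependent update in which each synapse keeps a presynaptic eligibility trace, is potentiated at postsynaptic spikes, and is depressed at presynaptic spikes, using only locally available signals. The neuron model, rates, time constants, and amplitudes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

dt, T = 1e-3, 20.0
n_in = 20
tau_pre, tau_post = 0.02, 0.02     # eligibility-trace time constants (s)
a_plus, a_minus = 0.01, 0.012      # potentiation / depression amplitudes

w = rng.uniform(0.0, 0.5, n_in)    # synaptic weights
x_pre = np.zeros(n_in)             # presynaptic traces
x_post = 0.0                       # postsynaptic trace
v, v_th, tau_m = 0.0, 1.0, 0.02    # leaky integrate-and-fire membrane

rates = rng.uniform(5.0, 40.0, n_in)   # toy input Poisson rates (Hz)

for step in range(int(T / dt)):
    pre = rng.random(n_in) < rates * dt        # presynaptic spikes this step
    v += dt * (-v / tau_m) + w @ pre           # leaky integration of input
    post = v > v_th                            # postsynaptic spike
    if post:
        v = 0.0

    # decay and update the local traces
    x_pre -= dt * x_pre / tau_pre
    x_post -= dt * x_post / tau_post
    x_pre[pre] += 1.0
    if post:
        x_post += 1.0

    # local, spike-timing-dependent update: potentiate synapses whose inputs
    # preceded a postsynaptic spike, depress synapses whose inputs arrive
    # shortly after postsynaptic spikes
    if post:
        w += a_plus * x_pre
    w[pre] -= a_minus * x_post
    np.clip(w, 0.0, 1.0, out=w)

print("learned weights:", np.round(w, 3))

The update depends only on quantities available at the synapse (its own traces and the pre- and postsynaptic spike times), which is the sense in which such rules are called local.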

Antje Vandenberg

Max-Planck-Institut für Mathematik in den Naturwissenschaften

Jürgen Jost

Max-Planck-Institut für Mathematik in den Naturwissenschaften

Henry Tuckwell

Max-Planck-Institut für Mathematik in den Naturwissenschaften