Homeostatic plasticity in Bayesian spiking networks as Expectation Maximization with posterior constraints

Neural Information Processing Systems

Recent spiking network models of Bayesian inference and unsupervised learning frequently either assume that inputs arrive in a special format or employ complex computations in neuronal activation functions and synaptic plasticity rules. Here we show, in a rigorous mathematical treatment, how homeostatic processes, which have previously received little attention in this context, can overcome common theoretical limitations and facilitate the neural implementation and performance of existing models. In particular, we show that homeostatic plasticity can be understood as the enforcement of a 'balancing' posterior constraint during probabilistic inference and learning with Expectation Maximization. We link homeostatic dynamics to the theory of variational inference and show that nontrivial terms, which typically appear during probabilistic inference in a large class of models, drop out. We demonstrate the feasibility of our approach in a spiking Winner-Take-All architecture of Bayesian inference and learning. Finally, we sketch how the mathematical framework can be extended to richer recurrent network architectures. Altogether, our theory provides a novel perspective on the interplay of homeostatic processes and synaptic plasticity in cortical microcircuits, and points to an essential role of homeostasis during inference and learning in spiking networks.
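
To make the central idea concrete, here is a minimal rate-based sketch of a soft Winner-Take-All layer in which a Hebbian-style weight update plays the role of the M-step and an intrinsic excitability term is driven toward the 'balancing' constraint of equal average activation. The Bernoulli-mixture likelihood, the learning rates eta_w and eta_b, and the target value 1/K are illustrative assumptions, not the paper's exact spiking model.

    import numpy as np

    rng = np.random.default_rng(0)
    K, D = 10, 64                        # number of WTA units (hidden causes), input dimension
    eta_w, eta_b = 0.05, 0.02            # learning rates for synapses and excitabilities
    target = 1.0 / K                     # 'balancing' constraint: equal average activation

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    def e_step(W, b, x):
        # Soft WTA competition: posterior over hidden causes, biased by excitabilities b
        u = W @ x + b
        p = np.exp(u - u.max())
        return p / p.sum()

    def learn_step(W, b, x):
        p = e_step(W, b, x)
        # Hebbian, M-step-like update: online gradient of a Bernoulli-mixture likelihood
        W = W + eta_w * p[:, None] * (x[None, :] - sigmoid(W))
        # Homeostatic intrinsic plasticity: push each unit's average activation toward 1/K
        b = b + eta_b * (target - p)
        return W, b

    W = rng.normal(0.0, 0.1, (K, D))     # synaptic weights (per-cause Bernoulli log-odds)
    b = np.zeros(K)                      # intrinsic excitabilities adapted by homeostasis
    # usage: for x in binary_inputs: W, b = learn_step(W, b, x)   # binary_inputs assumed given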


Perfect Associative Learning with Spike-Timing-Dependent Plasticity

Neural Information Processing Systems

Recent extensions of the Perceptron, such as the Tempotron, suggest that this theoretical concept is also highly relevant for understanding networks of spiking neurons in the brain. It is not known, however, how the computational power of the Perceptron and of its variants might be accomplished by the plasticity mechanisms of real synapses. Here we prove that anti-Hebbian spike-timing-dependent plasticity at excitatory synapses, together with Hebbian spike-timing-dependent plasticity at inhibitory synapses, is sufficient for realizing the original Perceptron Learning Rule if these plasticity mechanisms act in concert with the hyperpolarisation of the post-synaptic neurons. We also show that, with these simple yet biologically realistic dynamics, Tempotrons are learned efficiently. The proposed mechanism might underlie the acquisition of mappings from spatio-temporal activity patterns in one brain area onto spatio-temporal spike patterns in another region, as well as the formation of long-term memories in cortex.
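
At the level of the learning rule that the abstract says these STDP mechanisms realize, the update is the classic Perceptron Learning Rule. The sketch below states that rule on a pattern of presynaptic activity; the spike-timing details, the separate excitatory and inhibitory pathways, and the hyperpolarisation dynamics of the paper are abstracted away, and the names theta and eta are illustrative.

    import numpy as np

    def perceptron_step(w, x, should_fire, theta=1.0, eta=0.1):
        """One step of the Perceptron Learning Rule on an input pattern x.

        w           : afferent synaptic weights (1D array)
        x           : presynaptic activity pattern, e.g. spike counts (1D array)
        should_fire : True if a teacher signal says the neuron should spike
        """
        fired = (w @ x) >= theta
        if fired and not should_fire:
            w = w - eta * x        # erroneous spike: depression (anti-Hebbian for excitation)
        elif should_fire and not fired:
            w = w + eta * x        # missed teacher-driven spike: potentiation
        return w

Iterating this step over a finite, linearly separable set of patterns converges to weights that realize the desired input-output mapping, which is the associative learning that the paper shows the spike-based mechanism achieves.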


Synaptic Sampling: A Bayesian Approach to Neural Network Plasticity and Rewiring

Neural Information Processing Systems

In this article we reexamine the conceptual and mathematical framework for understanding the organization of plasticity in spiking neural networks. We propose that inherent stochasticity enables synaptic plasticity to carry out probabilistic inference by sampling from a posterior distribution over synaptic parameters. This view provides a viable alternative to existing models that propose convergence of synaptic weights to maximum likelihood parameters. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience. In simulations we show that our model of synaptic plasticity allows spiking neural networks to compensate continuously for unforeseen disturbances. Furthermore, it provides a normative mathematical framework for better understanding the permanent variability and rewiring observed in brain networks.
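
The generic mechanism behind this view is stochastic (Langevin-style) dynamics on synaptic parameters, whose stationary distribution is a posterior combining a prior with activity-dependent likelihood gradients. The following is a minimal discrete-time sketch under an assumed Gaussian prior and temperature; it is not the paper's full spiking-network formulation, and the parameter names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def synaptic_sampling_step(theta, grad_log_lik, beta=1e-3, temperature=1.0,
                               prior_mu=0.0, prior_sigma=1.0):
        """One Langevin step that samples synaptic parameters from a posterior.

        theta        : current synaptic parameters (1D array)
        grad_log_lik : gradient of the log-likelihood at theta (1D array),
                       e.g. supplied by an activity-dependent plasticity signal
        """
        grad_log_prior = -(theta - prior_mu) / prior_sigma ** 2   # Gaussian prior on parameters
        drift = beta * (grad_log_prior + grad_log_lik)            # deterministic plasticity part
        diffusion = np.sqrt(2.0 * beta * temperature) * rng.standard_normal(theta.shape)
        return theta + drift + diffusion                          # noisy update: posterior sampling

Because the noise never vanishes, the parameters keep fluctuating and rewiring around the posterior rather than converging to a single maximum-likelihood solution, which is the behaviour the abstract contrasts with existing models.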


Neurons Equipped with Intrinsic Plasticity Learn Stimulus Intensity Statistics

Neural Information Processing Systems

Experience constantly shapes neural circuits through a variety of plasticity mechanisms. While the functional roles of some plasticity mechanisms are well understood, it remains unclear how changes in neural excitability contribute to learning. Here, we develop a normative interpretation of intrinsic plasticity (IP) as a key component of unsupervised learning. We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities. We show analytically that inference and learning in our generative model can be achieved by a neural circuit of intensity-sensitive neurons equipped with a specific form of IP.
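
To illustrate one way intrinsic plasticity can track class-specific intensity statistics, the sketch below reduces the setting to a one-dimensional Gaussian mixture over stimulus intensities: inference assigns responsibilities to classes, and an IP-like update moves each unit's intensity parameter toward the intensities it accounts for. The Gaussian form, the fixed noise sigma, and the learning rate eta are illustrative assumptions rather than the paper's generative model.

    import numpy as np

    rng = np.random.default_rng(0)
    K = 5                                 # number of stimulus classes
    sigma = 0.2                           # assumed fixed intensity noise
    eta = 0.05                            # intrinsic plasticity learning rate
    log_pi = np.log(np.full(K, 1.0 / K))  # uniform class prior

    def infer(mu, s):
        # Posterior over classes given an observed intensity s
        log_p = log_pi - 0.5 * ((s - mu) / sigma) ** 2
        p = np.exp(log_p - log_p.max())
        return p / p.sum()

    def ip_step(mu, s):
        # IP-like update: each unit's intensity parameter (its excitability)
        # moves toward the intensities it takes responsibility for
        p = infer(mu, s)
        return mu + eta * p * (s - mu)

    mu = rng.uniform(0.5, 1.5, K)         # per-class intensity parameters
    # usage: for s in observed_intensities: mu = ip_step(mu, s)   # intensities assumed given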


Multiple Plasticity Mechanisms Enhance Associative Memory Retrieval in a Spiking Network Model of the Hippocampus

AAAI Conferences

Hippocampal area CA3 is crucial for long-term associative memory. CA3 is heavily recurrently connected, and memories are thought to be stored as the pattern of synaptic weights among its neurons. However, despite the well-known importance of the hippocampus for memory storage and retrieval, spiking neural network models of this crucial function have until now existed only as small-scale, proof-of-concept models. Our work is the first to develop a biologically plausible spiking neural network model of hippocampal memory encoding and retrieval with over two orders of magnitude more neurons in CA3 than previous models. It is also the first to investigate the effect of neurogenesis in the dentate gyrus on a spiking model of CA3. Using this model, we first show that a recently developed plasticity rule is crucial for good encoding and retrieval. We then show how neural properties related to neurogenesis and neuronal death enhance the storage and retrieval of associative memories in the recurrently connected CA3.
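
The core storage-and-retrieval idea, patterns imprinted in recurrent weights and recalled from partial cues, can be illustrated with a standard binary auto-associative (Hopfield-style) memory. This is a deliberately minimal stand-in for the paper's spiking CA3 model, with no spiking dynamics, dentate-gyrus input, or neurogenesis.

    import numpy as np

    def store(patterns):
        """Imprint binary (+1/-1) patterns into a recurrent weight matrix (Hebbian outer-product rule)."""
        n_patterns, n_units = patterns.shape
        W = np.zeros((n_units, n_units))
        for xi in patterns:
            W += np.outer(xi, xi)
        np.fill_diagonal(W, 0.0)            # no self-connections
        return W / n_patterns

    def retrieve(W, cue, steps=20):
        """Recurrent retrieval: iterate the network dynamics from a partial or noisy cue."""
        x = cue.astype(float).copy()
        for _ in range(steps):
            x = np.sign(W @ x)
            x[x == 0] = 1.0                 # break ties consistently
        return x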