Neurons Equipped with Intrinsic Plasticity Learn Stimulus Intensity Statistics

Neural Information Processing Systems

Experience constantly shapes neural circuits through a variety of plasticity mechanisms. While the functional roles of some plasticity mechanisms are well understood, it remains unclear how changes in neural excitability contribute to learning. Here, we develop a normative interpretation of intrinsic plasticity (IP) as a key component of unsupervised learning. We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities. We analytically show that inference and learning for our generative model can be achieved by a neural circuit with intensity-sensitive neurons equipped with a specific form of IP. Numerical experiments verify our analytical derivations and show robust behavior for artificial and natural stimuli. Our results link IP to non-trivial input statistics, in particular the statistics of stimulus intensities for classes to which a neuron is sensitive. More generally, our work paves the way toward new classification algorithms that are robust to intensity variations.
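The mixture model the abstract describes can be illustrated with a minimal sketch: a latent class is drawn from a categorical prior, a class-specific intensity is drawn from a gamma distribution, and Poisson counts are generated at rates set by the intensity times a normalized class template. All dimensions, parameter values, and symbol names below are hypothetical choices for illustration, not the paper's exact notation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
n_classes, n_features = 3, 8

# Mixture weights, normalized class templates (columns sum to 1), and
# class-specific gamma intensity parameters -- all hypothetical values
pi = np.full(n_classes, 1.0 / n_classes)
W = rng.random((n_features, n_classes))
W /= W.sum(axis=0)
alpha = np.array([2.0, 5.0, 9.0])   # gamma shape per class
beta = np.array([0.5, 0.5, 0.5])    # gamma rate per class

def sample(n):
    """Draw n observations from the intensity-aware mixture."""
    c = rng.choice(n_classes, size=n, p=pi)      # latent class
    z = rng.gamma(alpha[c], 1.0 / beta[c])       # latent stimulus intensity
    x = rng.poisson(z[:, None] * W[:, c].T)      # observed Poisson counts
    return c, z, x

c, z, x = sample(1000)
```

Because the templates are normalized, the expected total count of an observation equals its latent intensity, which is what makes the class-specific intensity statistics learnable in the first place.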


Reviews: Neurons Equipped with Intrinsic Plasticity Learn Stimulus Intensity Statistics

Neural Information Processing Systems

Neurons with Intrinsic Plasticity Learn Stimulus Statistics

In this paper, the authors build a generative product-Poisson-gamma model to recognize images with a wide dynamic range, extending a previously used Poisson mixture model. An equivalent neural circuit model with unsupervised learning is also proposed, implementing EM-like algorithms to find the parameters of the generative model. In the learning rule, the synaptic weights change according to a Hebb-like rule with synaptic scaling, and the intrinsic parameters lambda (Eq. 7 in the paper) are updated depending on the net input. It is very interesting that the fixed points of the network's parameter updates have the same form as the EM algorithm. However, I have the following questions about this paper: 1.
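The correspondence the reviewer points to — weight updates with divisive normalization (synaptic scaling) and an intensity parameter driven by the net input, whose fixed points match EM — can be sketched with a generic batch EM loop for a Poisson mixture with per-class intensities. This is an illustrative analogue, not the paper's circuit or its Eq. 7; all variable names and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: counts drawn from two hypothetical intensity regimes
x = np.concatenate([rng.poisson(3.0, size=(200, 5)),
                    rng.poisson(12.0, size=(200, 5))])

n_classes = 2
W = rng.random((x.shape[1], n_classes))
W /= W.sum(axis=0)                 # "synaptic scaling": columns sum to 1
lam = np.ones(n_classes)           # intrinsic intensity per class
pi = np.full(n_classes, 0.5)

for _ in range(50):
    # E-step: posterior responsibilities under a Poisson likelihood
    log_rate = np.log(lam * W + 1e-12)            # (features, classes)
    log_p = x @ log_rate - (lam * W).sum(axis=0) + np.log(pi)
    log_p -= log_p.max(axis=1, keepdims=True)
    r = np.exp(log_p)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: Hebbian-like weight update with divisive normalization,
    # and an intensity update driven by the total input each class receives
    S = x.T @ r                                   # (features, classes)
    W = S / S.sum(axis=0)
    lam = (r * x.sum(axis=1, keepdims=True)).sum(axis=0) / r.sum(axis=0)
    pi = r.mean(axis=0)
```

Here the normalized `W` columns play the role of synaptic weights kept at a fixed total strength, while `lam` absorbs the overall input intensity per class, mirroring the division of labor between synaptic and intrinsic plasticity that the paper formalizes.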


Neurons Equipped with Intrinsic Plasticity Learn Stimulus Intensity Statistics

Monk, Travis, Savin, Cristina, Lücke, Jörg

Neural Information Processing Systems

