
Collaborating Authors

Seung, H. Sebastian


Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks

Neural Information Processing Systems

Ascribing computational principles to neural feedback circuits is an important problem in theoretical neuroscience. We study symmetric threshold-linear networks and derive stability results that go beyond the insights that can be gained from Lyapunov theory or energy functions. By applying linear analysis to subnetworks composed of coactive neurons, we determine the stability of potential steady states. We find that stability depends on two types of eigenmodes. One type determines global stability and the other type determines whether or not multistability is possible.
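
A minimal numerical sketch of the dynamics and of one reading of the permitted-set criterion described above; the weight matrix W, the input b, and the eigenvalue test are illustrative assumptions, not taken verbatim from the paper:

    import numpy as np

    def simulate(W, b, x0, dt=0.01, steps=20000):
        # Euler integration of the threshold-linear dynamics dx/dt = -x + [b + W x]_+
        x = x0.copy()
        for _ in range(steps):
            x += dt * (-x + np.maximum(0.0, b + W @ x))
        return x

    def is_permitted(W, subset):
        # For symmetric W, treat a candidate set of coactive neurons as stable
        # ("permitted") when the largest eigenvalue of the corresponding principal
        # submatrix of W is below 1, so the linearized subnetwork dynamics decay.
        Wsub = W[np.ix_(subset, subset)]
        return np.max(np.linalg.eigvalsh(Wsub)) < 1.0

    # Toy symmetric 3-neuron network (hypothetical numbers, for illustration only).
    W = np.array([[ 0.0, -0.5, -0.5],
                  [-0.5,  0.0,  0.2],
                  [-0.5,  0.2,  0.0]])
    b = np.ones(3)
    print(is_permitted(W, [1, 2]))               # True: neurons 1 and 2 may be coactive
    print(simulate(W, b, np.random.rand(3)))     # a steady state of the full network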


Learning Winner-take-all Competition Between Groups of Neurons in Lateral Inhibitory Networks

Neural Information Processing Systems

It has long been known that lateral inhibition in neural networks can lead to a winner-take-all competition, so that only a single neuron is active at a steady state. Here we show how to organize lateral inhibition so that groups of neurons compete to be active. Given a collection of potentially overlapping groups, the inhibitory connectivity is set by a formula that can be interpreted as arising from a simple learning rule. Our analysis demonstrates that such inhibition generally results in winner-take-all competition between the given groups, with the exception of some degenerate cases. In a broader context, the network serves as a particular illustration of the general distinction between permitted and forbidden sets, which was introduced recently. From this viewpoint, the computational function of our network is to store and retrieve memories as permitted sets of coactive neurons.
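
As a hedged illustration of the classical single-winner case referenced in the first sentence (not the paper's group-competition formula, which is built from the given overlapping groups), a threshold-linear network with uniform lateral inhibition can be simulated as follows; the constants alpha and beta and the inputs are made up for the example:

    import numpy as np

    def winner_take_all(inputs, alpha=0.5, beta=1.0, dt=0.01, steps=20000):
        # Threshold-linear units with self-excitation alpha and uniform lateral
        # inhibition beta:  dx_i/dt = -x_i + [b_i + alpha*x_i - beta*sum_{j!=i} x_j]_+
        # With strong enough inhibition only a single unit remains active.
        n = len(inputs)
        W = alpha * np.eye(n) - beta * (np.ones((n, n)) - np.eye(n))
        x = np.random.rand(n)
        for _ in range(steps):
            x += dt * (-x + np.maximum(0.0, inputs + W @ x))
        return x

    print(winner_take_all(np.array([1.0, 1.1, 0.9])))   # typically only one nonzero entry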


Spike-based Learning Rules and Stabilization of Persistent Neural Activity

Neural Information Processing Systems

We analyze the conditions under which synaptic learning rules based on action potential timing can be approximated by learning rules based on firing rates. In particular, we consider a form of plasticity in which synapses depress when a presynaptic spike is followed by a postsynaptic spike, and potentiate with the opposite temporal ordering. Such differential anti-Hebbian plasticity can be approximated under certain conditions by a learning rule that depends on the time derivative of the postsynaptic firing rate. Such a learning rule acts to stabilize persistent neural activity patterns in recurrent neural networks.
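
A sketch, in the spirit of this abstract, of an antisymmetric spike-timing window and its rate-based approximation; the exponential window shape and the constants A and \tau are assumptions for illustration, not the paper's exact choices:

    \Delta W_{ij}(\Delta t) \;=\; -A\,\operatorname{sign}(\Delta t)\, e^{-|\Delta t|/\tau},
    \qquad \Delta t = t_i^{\mathrm{post}} - t_j^{\mathrm{pre}},

so that a presynaptic spike followed by a postsynaptic one (\Delta t > 0) depresses the synapse, and the reverse ordering potentiates it. For slowly varying firing rates \nu, expanding to first order gives a rule of the advertised form,

    \frac{dW_{ij}}{dt} \;\approx\; -2A\tau^{2}\,\nu_j^{\mathrm{pre}}\,\frac{d\nu_i^{\mathrm{post}}}{dt},

i.e. the weight change tracks the time derivative of the postsynaptic firing rate.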


Learning Continuous Attractors in Recurrent Networks

Neural Information Processing Systems

One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn.
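
A linear caricature of the continuous-attractor idea (my own minimal example; the paper's network is nonlinear and learned from patterns): if a symmetric weight matrix has one eigenvalue exactly equal to 1, the dynamics dx/dt = -x + Wx possess a whole line of fixed points, and relaxing from a partial pattern lands on a point of that line.

    import numpy as np

    # Build a symmetric W with eigenvalue 1 along v and 0.2 in all other directions.
    v = np.ones(3) / np.sqrt(3.0)
    P = np.outer(v, v)
    W = 1.0 * P + 0.2 * (np.eye(3) - P)

    x = np.array([2.0, 0.0, 1.0])        # a "partial pattern"
    dt = 0.01
    for _ in range(5000):
        x = x + dt * (-x + W @ x)        # dx/dt = -x + W x

    print(x)   # ~ (v @ [2, 0, 1]) * v : the state has relaxed onto the line attractor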


The Rectified Gaussian Distribution

Neural Information Processing Systems

The rectified Gaussian is a standard Gaussian whose variables are constrained to be nonnegative. This simple modification brings increased representational power, as illustrated by two multimodal examples of the rectified Gaussian, the competitive and the cooperative distributions. The modes of the competitive distribution are well-separated by regions of low probability. The modes of the cooperative distribution are closely spaced along a nonlinear continuous manifold. Neither distribution can be accurately approximated by a single standard Gaussian. In short, the rectified Gaussian is able to represent both discrete and continuous variability in a way that a standard Gaussian cannot.
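
In the notation I will assume here (which may differ in details from the paper's), the rectified Gaussian restricts a Gaussian-type energy to the nonnegative orthant:

    p(\mathbf{x}) \;=\; \frac{1}{Z}\, e^{-\beta E(\mathbf{x})},
    \qquad E(\mathbf{x}) \;=\; \tfrac12\,\mathbf{x}^{\mathsf T} A\,\mathbf{x} - \mathbf{b}^{\mathsf T}\mathbf{x},
    \qquad \mathbf{x} \ge 0 .

Since x is confined to the nonnegative orthant, it suffices in this formulation that the energy grow on that orthant; A need not be positive definite, and such choices of A are what give rise to multimodal distributions like the competitive and cooperative examples above.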


Learning Generative Models with the Up Propagation Algorithm

Neural Information Processing Systems

Up-propagation is an algorithm for inverting and learning neural network generative models. Sensory input is processed by inverting a model that generates patterns from hidden variables using top-down connections. The inversion process is iterative, utilizing a negative feedback loop that depends on an error signal propagated by bottom-up connections. The error signal is also used to learn the generative model from examples. The algorithm is benchmarked against principal component analysis in experiments on images of handwritten digits.
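
A single-layer sketch of the scheme described in this abstract; the restriction to one layer, the sigmoid nonlinearity, the step sizes, and the toy data are my simplifying assumptions, while the paper treats multilayer generative models:

    import numpy as np

    def f(u):                                     # assumed top-down nonlinearity (sigmoid)
        return 1.0 / (1.0 + np.exp(-u))

    def invert(x, W, n_iter=200, eta=0.5):
        # Infer hidden variables s for input x: the top-down reconstruction error
        # is fed back bottom-up through W.T in a negative feedback loop.
        s = np.zeros(W.shape[1])
        for _ in range(n_iter):
            u = W @ s
            err = x - f(u)
            s += eta * (W.T @ (err * f(u) * (1.0 - f(u))))
        return s, x - f(W @ s)

    def learn_step(x, W, lr=0.1):
        # The same error signal also drives learning of the generative weights.
        s, err = invert(x, W)
        u = W @ s
        W += lr * np.outer(err * f(u) * (1.0 - f(u)), s)
        return W

    rng = np.random.default_rng(0)
    W = 0.1 * rng.standard_normal((8, 3))         # 8 visible units, 3 hidden variables
    for x in rng.random((50, 8)):                 # toy "sensory" data, for illustration
        W = learn_step(x, W)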


Minimax and Hamiltonian Dynamics of Excitatory-Inhibitory Networks

Neural Information Processing Systems

A Lyapunov function for excitatory-inhibitory networks is constructed. The construction assumes symmetric interactions within excitatory and inhibitory populations of neurons, and antisymmetric interactions between populations. The Lyapunov function yields sufficient conditions for the global asymptotic stability of fixed points. If these conditions are violated, limit cycles may be stable. The relations of the Lyapunov function to optimization theory and classical mechanics are revealed by minimax and dissipative Hamiltonian forms of the network dynamics. The dynamics of a neural network with symmetric interactions provably converges to fixed points under very general assumptions [1, 2].
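
A small simulation sketch of a network with the stated symmetry structure; the matrices A, D, B, the inputs, and the scaling constants are made-up examples rather than the paper's construction:

    import numpy as np

    rng = np.random.default_rng(1)
    nE, nI = 4, 3
    A = rng.standard_normal((nE, nE)); A = 0.05 * (A + A.T)   # E-E block, symmetric
    D = rng.standard_normal((nI, nI)); D = 0.05 * (D + D.T)   # I-I block, symmetric
    B = 0.5 * rng.standard_normal((nI, nE))                   # E -> I coupling

    def step(x, y, dt=0.01):
        # dx/dt = -x + [bx + A x - B.T y]_+ ,   dy/dt = -y + [by + B x + D y]_+
        # Within-population blocks A and D are symmetric; the between-population
        # block [[0, -B.T], [B, 0]] is antisymmetric, as assumed in the abstract.
        dx = -x + np.maximum(0.0, 1.0 + A @ x - B.T @ y)
        dy = -y + np.maximum(0.0, 0.5 + B @ x + D @ y)
        return x + dt * dx, y + dt * dy

    x, y = np.zeros(nE), np.zeros(nI)
    for _ in range(50000):
        x, y = step(x, y)
    print(x, y)   # weak interactions settle to a fixed point; parameters violating
                  # the stability conditions can instead produce limit cycles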