Collaborating Authors

Hopfield, John J.


Dense Associative Memory for Pattern Recognition

Neural Information Processing Systems

A model of associative memory is studied, which stores and reliably retrieves many more patterns than the number of neurons in the network. We propose a simple duality between this dense associative memory and neural networks commonly used in deep learning. On the associative memory side of this duality, a family of models that smoothly interpolates between two limiting cases can be constructed. One limit is referred to as the feature-matching mode of pattern recognition, and the other one as the prototype regime. On the deep learning side of the duality, this family corresponds to feedforward neural networks with one hidden layer and various activation functions, which transmit the activities of the visible neurons to the hidden layer. This family of activation functions includes the logistic function, rectified linear units, and rectified polynomials of higher degrees. The proposed duality makes it possible to apply energy-based intuition from associative memory to analyze computational properties of neural networks with unusual activation functions, namely the higher rectified polynomials, which until now have not been used in deep learning. The utility of the dense memories is illustrated for two test cases: the logical gate XOR and the recognition of handwritten digits from the MNIST data set.
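
The following NumPy sketch is a minimal toy, not the authors' implementation: it scores a binary configuration with an energy built from rectified polynomials F(x) = max(x, 0)^n of its overlaps with the stored patterns, and retrieves a pattern by asynchronous single-neuron flips that lower that energy. The network size, number of patterns, degree n, and noise level below are illustrative assumptions.

```python
# Minimal dense-associative-memory sketch with a rectified polynomial
# interaction function; n = 1 is close to the feature-matching limit,
# larger n pushes the model toward the prototype regime.
import numpy as np

def F(x, n):
    """Rectified polynomial of degree n."""
    return np.maximum(x, 0.0) ** n

def energy(sigma, patterns, n):
    """E(sigma) = -sum_mu F(<xi_mu, sigma>) over the stored patterns xi_mu."""
    return -np.sum(F(patterns @ sigma, n))

def recall(probe, patterns, n, sweeps=10, seed=0):
    """Asynchronously flip single neurons whenever a flip lowers the energy."""
    rng = np.random.default_rng(seed)
    sigma = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(sigma.size):
            flipped = sigma.copy()
            flipped[i] = -flipped[i]
            if energy(flipped, patterns, n) < energy(sigma, patterns, n):
                sigma = flipped
    return sigma

# Toy usage: store random +/-1 patterns and recall one from a corrupted probe.
rng = np.random.default_rng(0)
N, K, n = 100, 20, 3                         # neurons, stored patterns, degree
patterns = rng.choice([-1.0, 1.0], size=(K, N))
probe = patterns[0].copy()
probe[:15] *= -1                             # corrupt 15 of the 100 bits
print(np.mean(recall(probe, patterns, n) == patterns[0]))   # recovered-bit fraction
```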


Minimax and Hamiltonian Dynamics of Excitatory-Inhibitory Networks

Neural Information Processing Systems

A Lyapunov function for excitatory-inhibitory networks is constructed. The construction assumes symmetric interactions within excitatory and inhibitory populations of neurons, and antisymmetric interactions between populations. The Lyapunov function yields sufficient conditions for the global asymptotic stability of fixed points. If these conditions are violated, limit cycles may be stable. The relations of the Lyapunov function to optimization theory and classical mechanics are revealed by minimax and dissipative Hamiltonian forms of the network dynamics. The dynamics of a neural network with symmetric interactions provably converges to fixed points under very general assumptions [1, 2].
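
To make the assumed structure concrete, here is a small rate-model simulation; it is purely illustrative and not the paper's construction. The weight matrix is symmetric within the excitatory and inhibitory blocks while the cross-population couplings enter with opposite signs, and comparing two late-time snapshots gives a crude check of whether the dynamics has settled to a fixed point or keeps cycling. The additive dynamics du/dt = -u + W tanh(u) + b and every parameter below are assumptions made for the sketch.

```python
# Toy excitatory-inhibitory rate network: symmetric within-population blocks,
# antisymmetric coupling between populations (illustrative, not the paper's model).
import numpy as np

def simulate(W, b, T=200.0, dt=0.01, seed=0):
    """Euler-integrate du/dt = -u + W @ tanh(u) + b from a random start."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(b.size)
    for _ in range(int(T / dt)):
        u = u + dt * (-u + W @ np.tanh(u) + b)
    return u

nE, nI = 3, 3
rng = np.random.default_rng(1)
A = rng.standard_normal((nE, nE)); A = 0.5 * (A + A.T)    # symmetric E-E block
C = rng.standard_normal((nI, nI)); C = 0.5 * (C + C.T)    # symmetric I-I block
B = 2.0 * rng.standard_normal((nE, nI))                   # cross-population coupling
W = np.block([[A, -B],                                     # I -> E enters with a minus sign,
              [B.T, C]])                                   # E -> I with a plus: antisymmetric part
b = rng.standard_normal(nE + nI)

u_late1 = simulate(W, b, T=200.0)
u_late2 = simulate(W, b, T=220.0)
# If a fixed point is globally stable the two late-time snapshots agree;
# a persistent difference hints at a stable limit cycle instead.
print(np.max(np.abs(u_late1 - u_late2)))
```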


Computing with Action Potentials

Neural Information Processing Systems

Most computational engineering based loosely on biology uses continuous variables to represent neural activity. Yet most neurons communicate with action potentials. The engineering view is equivalent to using a rate-code for representing information and for computing. An increasing number of examples are being discovered in which biology may not be using rate codes. Information can be represented using the timing of action potentials, and efficiently computed within this representation. The "analog match" problem of odour identification is a simple problem which can be efficiently solved using action potential timing and an underlying rhythm.
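
As a rough illustration of the timing-based view, the sketch below encodes each analog input as a spike time proportional to the negative logarithm of its value within an assumed rhythm. Scaling a stored template by a common concentration then shifts every spike by the same amount, so a coincidence test recognises the match. The logarithmic code, the period, the tolerance, and the example patterns are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the "analog match" idea with a spike-timing code (illustrative only).
import numpy as np

PERIOD = 25.0                      # ms, one cycle of the assumed underlying rhythm

def spike_times(x):
    """Stronger inputs fire earlier in the cycle (logarithmic time code)."""
    return (-np.log(x)) % PERIOD

def analog_match(inputs, template, tol=0.5):
    """Inputs match the template up to an overall scale iff, after subtracting
    the template's spike times, every relative delay is (nearly) the same."""
    offsets = (spike_times(inputs) - spike_times(template)) % PERIOD
    return np.ptp(offsets) < tol   # all spikes arrive together -> coincidence

template = np.array([0.9, 0.3, 0.5, 0.05])                        # stored odour pattern (made up)
print(analog_match(3.0 * template, template))                     # same odour, 3x concentration -> True
print(analog_match(np.array([0.9, 0.5, 0.3, 0.05]), template))    # different mixture -> False
```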


Modeling the Olfactory Bulb - Coupled Nonlinear Oscillators

Neural Information Processing Systems

A mathematical model based on the bulbar anatomy and electrophysiology is described. Simulations produce a 35-60 Hz modulated activity coherent across the bulb, mimicking the observed field potentials. The decision states (for the odor information) here can be thought of as stable cycles, rather than point stable states typical of simpler neuro-computing models. Analysis and simulations show that a group of coupled nonlinear oscillators are responsible for the oscillatory activities determined by the odor input, and that the bulb, with appropriate inputs from higher centers, can enhance or suppress the sensitivity to particular odors. The model provides a framework in which to understand the transform between odor input and the bulbar output to olfactory cortex.
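
The following toy simulation is not the bulb model described above; it only shows the basic ingredient of the abstract: a reciprocally coupled excitatory (mitral-like) and inhibitory (granule-like) pair that sits at a quiet fixed point without input but moves onto a stable cycle once an odor-like drive is applied. All couplings, thresholds, and time constants are made-up values chosen so that the sketch oscillates, with the cycle's frequency landing in the tens of hertz; the script prints the measured value.

```python
# Toy mitral/granule oscillator pair (illustrative constants, not the paper's model).
import numpy as np

def S(u):
    """Sigmoidal rate function."""
    return 1.0 / (1.0 + np.exp(-u))

W_EE, W_EI, W_IE, W_II = 10.0, 10.0, 10.0, 0.0    # assumed couplings
THETA_E = THETA_I = 5.0                            # assumed thresholds
TAU, DT = 0.009, 1e-4                              # time constant and step, seconds

def run_bulb(odor_drive, T=1.0):
    """Euler-integrate the excitatory/inhibitory pair and return the E trace."""
    E = I = 0.0
    trace = []
    for _ in range(int(T / DT)):
        dE = (-E + S(W_EE * E - W_EI * I + odor_drive - THETA_E)) / TAU
        dI = (-I + S(W_IE * E - W_II * I - THETA_I)) / TAU
        E += DT * dE
        I += DT * dI
        trace.append(E)
    return np.array(trace)

quiet = run_bulb(odor_drive=0.0)     # no odor: settles to a low fixed point
active = run_bulb(odor_drive=5.0)    # odor present: sustained oscillation
half = active[active.size // 2:]     # discard the transient
spec = np.abs(np.fft.rfft(half - half.mean()))
freqs = np.fft.rfftfreq(half.size, d=DT)
print("resting spread:", np.ptp(quiet[-1000:]))                   # ~0, no oscillation
print("dominant frequency with odor:", freqs[spec.argmax()], "Hz")
```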

