

Associative Memory in a Network of `Biological' Neurons

Neural Information Processing Systems

The Hopfield network (Hopfield, 1982, 1984) provides a simple model of an associative memory in a neuronal structure. This model, however, is based on highly artificial assumptions, especially the use of formal two-state neurons (Hopfield, 1982) or graded-response neurons (Hopfield, 1984).
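The formal two-state dynamics the abstract refers to can be sketched in a few lines. The network size, pattern count, and asynchronous update schedule below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Minimal sketch of a Hopfield (1982) associative memory with formal
# two-state (+1/-1) neurons; sizes and seeds are illustrative.
rng = np.random.default_rng(0)
N, P = 64, 3                          # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product weights with zero diagonal (no self-coupling).
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(probe, sweeps=20):
    """Asynchronous threshold updates until the state settles."""
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt a stored pattern and let the dynamics clean it up.
noisy = patterns[0].copy()
noisy[:8] *= -1                       # flip 8 of 64 bits
restored = recall(noisy)
print(patterns[0] @ restored / N)     # overlap with the stored pattern
```

The stored patterns act as attractors of the dynamics, so the overlap with the original pattern should grow back toward 1 during recall.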


CAM Storage of Analog Patterns and Continuous Sequences with 3N² Weights

Neural Information Processing Systems

A simple architecture and algorithm for analytically guaranteed associative memory storage of analog patterns, continuous sequences, and chaotic attractors in the same network is described. A matrix inversion determines network weights, given prototype patterns to be stored.
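A hedged sketch of weight determination by matrix inversion, in the spirit of the abstract: the projection (pseudo-inverse) rule below makes every prototype an exact fixed point of the linear dynamics. The sizes and binary prototypes are our illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Projection (pseudo-inverse) rule: solve W X = X for the weights so
# that each prototype column of X is exactly a fixed point.
rng = np.random.default_rng(1)
N, P = 32, 5
X = rng.choice([-1.0, 1.0], size=(N, P))     # prototypes as columns

# W = X (X^T X)^{-1} X^T -- the matrix inversion the abstract mentions.
W = X @ np.linalg.inv(X.T @ X) @ X.T

s = X[:, 2]
print(np.allclose(W @ s, s))                 # prototype is a fixed point
```

Unlike the Hebbian outer-product rule, this construction guarantees exact storage even for correlated prototypes, at the cost of inverting a P x P matrix.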


Dynamics of Learning in Recurrent Feature-Discovery Networks

Neural Information Processing Systems

The self-organization of recurrent feature-discovery networks is studied from the perspective of dynamical systems. Bifurcation theory reveals parameter regimes in which multiple equilibria or limit cycles coexist with the equilibrium at which the networks perform principal component analysis.
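The principal-component equilibrium mentioned above can be illustrated with Oja's single-unit learning rule, a standard feature-discovery dynamics (not necessarily the exact model analyzed in the paper); the data distribution, learning rate, and dimensions below are our assumptions:

```python
import numpy as np

# Oja's rule: Hebbian learning with a normalizing decay term whose
# stable equilibrium is the leading principal component of the inputs.
rng = np.random.default_rng(2)
T = 5000
# Correlated 2-D inputs with a well-defined principal axis.
x = rng.normal(size=(T, 2)) @ np.array([[2.0, 1.5], [0.0, 0.5]])

w = rng.normal(size=2)
eta = 0.01
for xi in x:
    y = w @ xi
    w += eta * y * (xi - y * w)       # Oja's rule update

# Compare against the top eigenvector of the sample covariance.
C = x.T @ x / T
vals, vecs = np.linalg.eigh(C)
pc1 = vecs[:, -1]
print(abs(w @ pc1))                   # should approach 1 (unit-norm w)
```

Whether the dynamics settle at this PCA equilibrium or at a coexisting attractor is exactly the kind of question the bifurcation analysis in the paper addresses.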


Phase-coupling in Two-Dimensional Networks of Interacting Oscillators

Neural Information Processing Systems

Coherent oscillatory activity in large networks of biological or artificial neural units may be a useful mechanism for coding information pertaining to a single perceptual object or for detailing regularities within a data set. We consider the dynamics of a large array of simple coupled oscillators under a variety of connection schemes. Of particular interest is the rapid and robust phase-locking that results from a "sparse" scheme where each oscillator is strongly coupled to a tiny, randomly selected, subset of its neighbors.
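A minimal sketch of the sparse scheme using Kuramoto-style phase oscillators (a standard simplification; the paper's oscillator model may differ). The network size, coupling strength, and per-unit neighbor count below are illustrative:

```python
import numpy as np

# Each of n phase oscillators is strongly coupled to k randomly chosen
# others -- the "sparse" scheme of the abstract, in Kuramoto form.
rng = np.random.default_rng(3)
n, k, K, dt = 100, 4, 5.0, 0.01
theta = rng.uniform(0, 2 * np.pi, n)
omega = rng.normal(0, 0.1, n)         # nearly identical frequencies

nbrs = np.array([rng.choice(np.delete(np.arange(n), i), k, replace=False)
                 for i in range(n)])

def order(th):
    """Kuramoto order parameter: 1 means complete phase-locking."""
    return abs(np.exp(1j * th).mean())

for _ in range(2000):
    pull = np.sin(theta[nbrs] - theta[:, None]).mean(axis=1)
    theta += dt * (omega + K * pull)

print(order(theta))                   # order parameter near 1: locking
```

Even with only k = 4 connections per oscillator, the strong random coupling pulls the array into a common phase quickly, which is the qualitative behavior the abstract highlights.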


Stochastic Neurodynamics

Neural Information Processing Systems

The main point of this paper is that stochastic neural networks have a mathematical structure that corresponds quite closely with that of quantum field theory. Neural network Liouvillians and Lagrangians can be derived, just as can spin Hamiltonians and Lagrangians in QFT. It remains to show the efficacy of such a description.


Cholinergic Modulation May Enhance Cortical Associative Memory Function

Neural Information Processing Systems

Combining neuropharmacological experiments with computational modeling, we have shown that cholinergic modulation may enhance associative memory function in piriform (olfactory) cortex. We have shown that the acetylcholine analogue carbachol selectively suppresses synaptic transmission between cells within piriform cortex, while leaving input connections unaffected. When tested in a computational model of piriform cortex, this selective suppression, applied during learning, enhances associative memory performance.


A Recurrent Neural Network Model of Velocity Storage in the Vestibulo-Ocular Reflex

Neural Information Processing Systems

A three-layered neural network model was used to explore the organization of the vestibulo-ocular reflex (VOR). The dynamic model was trained using recurrent back-propagation to produce compensatory, long duration eye muscle motoneuron outputs in response to short duration vestibular afferent head velocity inputs. The network learned to produce this response prolongation, known as velocity storage, by developing complex, lateral inhibitory interactions among the interneurons. These had the low baseline, long time constant, rectified and skewed responses that are characteristic of real VOR interneurons. The model suggests that all of these features are interrelated and result from lateral inhibition.


Neural Network Application to Diagnostics and Control of Vehicle Control Systems

Neural Information Processing Systems

Diagnosis of faults in complex, real-time control systems is a complicated task that has resisted solution by traditional methods. We have shown that neural networks can be successfully employed to diagnose faults in digitally controlled powertrain systems. This paper discusses the means we use to develop the appropriate databases for training and testing in order to select the optimum network architectures and to provide reasonable estimates of the classification accuracy of these networks on new samples of data.


A VLSI Neural Network for Color Constancy

Neural Information Processing Systems

A system for color correction has been designed, built, and tested successfully; the essential components are three custom chips built using subthreshold analog CMOS VLSI. The system, based on Land's Retinex theory of color constancy, produces colors similar in many respects to those produced by the visual system. Resistive grids implemented in analog VLSI perform the smoothing operation central to the algorithm at video rates. With the electronic system, the strengths and weaknesses of the algorithm are explored.
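The smoothing a resistive grid computes can be sketched as the steady state of a 1-D node network: each node relaxes toward the average of its neighbors while staying tied to its input. The conductance-ratio parameter `lam` and the step input are our illustrative assumptions; the chips of course reach this steady state in parallel analog hardware rather than by matrix inversion:

```python
import numpy as np

# Steady state of a 1-D resistive grid: (I + lam * L) s = x, where L is
# the discrete Laplacian and lam sets lateral vs. input conductance.
def grid_smooth(x, lam=10.0):
    n = len(x)
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1           # reflecting boundaries
    return np.linalg.solve(np.eye(n) + lam * L, x)

edge = np.r_[np.zeros(20), np.ones(20)]   # step input
smooth = grid_smooth(edge)
print(smooth.min(), smooth.max())         # step smeared into a ramp
```

Larger `lam` corresponds to a longer space constant on the grid, i.e. smoothing over a wider neighborhood.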


Optimal Sampling of Natural Images: A Design Principle for the Visual System

Neural Information Processing Systems

One of the major theoretical issues in neural computation is to understand how this efficiency is reached given the constraints imposed by the biological hardware. Part of the problem [2] is simply to give an informative representation of the visual world using a limited number of neurons, each of which has a limited information capacity. The information capacity of the visual system is determined in part by the spatial transfer characteristics, or "receptive fields," of the individual cells. From a theoretical point of view we can ask if there exists an optimal choice for these receptive fields, a choice which maximizes the information transfer through the system given the hardware constraints. We show that this optimization problem has a simple formulation which allows us to use the intuition developed through the variational approach to quantum mechanics. In general our approach leads to receptive fields which are quite unlike those observed for cells in the visual cortex. In particular, orientation selectivity is not a generic prediction. The optimal filters, however, depend on the statistical properties of the images we are trying to sample. Natural images have a symmetry, scale invariance [4], which saves the theory: the optimal receptive fields for sampling of natural images are indeed orientation selective and bear a striking resemblance to observed receptive field characteristics in the mammalian visual cortex as well as the retinal ganglion cells of lower vertebrates.
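The scale-invariance property the abstract leans on can be checked numerically: a power spectrum S(k) proportional to 1/k² keeps its functional form when every spatial frequency is rescaled, changing only by an overall factor. The spectrum and the rescaling factors below are illustrative:

```python
import numpy as np

# Scale invariance of a 1/k^2 power spectrum: rescaling k -> lam*k
# reproduces the same spectrum up to the constant factor lam^2.
k = np.logspace(-2, 2, 200)
S = 1.0 / k**2

for lam in (0.5, 2.0, 7.0):
    rescaled = 1.0 / (lam * k) ** 2
    print(np.allclose(S, lam**2 * rescaled))
```

It is this symmetry, a statistical property of natural images rather than of the hardware, that singles out the orientation-selective filters as optimal in the paper's analysis.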