Storing Covariance by the Associative Long-Term Potentiation and Depression of Synaptic Strengths in the Hippocampus

Neural Information Processing Systems

We have tested this assumption in the hippocampus, a cortical structure of the brain that is involved in long-term memory. A brief, high-frequency activation of excitatory synapses in the hippocampus produces an increase in synaptic strength known as long-term potentiation, or LTP (Bliss and Lomo, 1973), that can last for many days. LTP is known to be Hebbian since it requires the simultaneous release of neurotransmitter from presynaptic terminals coupled with postsynaptic depolarization (Kelso et al., 1986; Malinow and Miller, 1986; Gustafsson et al., 1987). However, a mechanism for the persistent reduction of synaptic strength that could balance LTP has not yet been demonstrated. We studied the associative interactions between separate inputs onto the same dendritic trees of hippocampal pyramidal cells of field CA1, and found that a low-frequency input which, by itself, does not persistently change synaptic strength, can either increase (associative LTP) or decrease in strength (associative long-term depression, or LTD) depending upon whether it is positively or negatively correlated in time with a second, high-frequency bursting input. LTP of synaptic strength is Hebbian, and LTD is anti-Hebbian since it is elicited by pairing presynaptic firing with postsynaptic hyperpolarization sufficient to block postsynaptic activity.
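
The correlational rule described here is often summarized as a covariance rule: the weight change tracks the product of pre- and postsynaptic deviations from their mean activity. A minimal rate-based sketch of that idea (the variable names, learning rate, and toy data are illustrative, not from the paper):

```python
import numpy as np

def covariance_update(w, pre, post, lr=0.01):
    """Covariance-style plasticity: the weight change is proportional to
    the product of pre- and postsynaptic deviations from their mean rates.
    Positive correlation -> potentiation (associative LTP);
    negative correlation -> depression (associative LTD)."""
    dpre = pre - pre.mean()
    dpost = post - post.mean()
    # Average the product of deviations over the sample of activity patterns.
    return w + lr * np.mean(dpre * dpost)

rng = np.random.default_rng(0)
post = rng.random(500)                                        # postsynaptic rates
correlated = post + 0.1 * rng.standard_normal(500)            # fires with the cell
anticorrelated = 1.0 - post + 0.1 * rng.standard_normal(500)  # fires when it is silent

print(covariance_update(1.0, correlated, post))      # > 1.0 : associative LTP
print(covariance_update(1.0, anticorrelated, post))  # < 1.0 : associative LTD
```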


A Network for Image Segmentation Using Color

Neural Information Processing Systems

A system that uses color to recognize objects must exhibit some degree of color constancy; otherwise it might ascribe different characteristics to the same object under different lights. But the first step in using color for recognition, segmenting the scene into regions of different colors, does not require color constancy.


Adaptive Neural Networks Using MOS Charge Storage

Neural Information Processing Systems

However, to achieve the full power of a VLSI implementation of an adaptive algorithm, the learning operation must be built into the circuit. We have fabricated and tested a circuit ideal for this purpose by connecting a pair of capacitors with a CCD-like structure, allowing for variable-size weight changes as well as a weight-decay operation. A 2.5μ CMOS version achieves better than 10 bits of dynamic range in a 140μ
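
A rough behavioral sketch of what such an analog weight cell does, assuming normalized charge in [-1, 1], a multiplicative decay, and 10-bit read-out resolution (all illustrative stand-ins for the analog circuit itself):

```python
class ChargeStorageWeight:
    """Toy model of a weight stored as charge on a capacitor pair.
    Supports variable-size increments (charge packets moved by a CCD-like
    structure) and a weight-decay operation. The 10-bit resolution below
    mirrors the reported dynamic range, not the circuit's mechanism."""

    LEVELS = 1 << 10  # ~10 bits of dynamic range

    def __init__(self, w=0.0):
        self.w = w

    def transfer(self, delta):
        """Move a charge packet between the capacitors (variable step size),
        clipping at the supply rails, here normalized to [-1, 1]."""
        self.w = max(-1.0, min(1.0, self.w + delta))

    def decay(self, factor=0.99):
        """Leak a fixed fraction of the stored charge."""
        self.w *= factor

    def read(self):
        """Quantize to the usable resolution when reading the weight out."""
        step = 2.0 / self.LEVELS
        return round(self.w / step) * step

cell = ChargeStorageWeight()
cell.transfer(0.123)
cell.decay()
print(cell.read())
```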


Analog Implementation of Shunting Neural Networks

Neural Information Processing Systems

The first case shows recurrent activity, while the second case is non-recurrent, or feedforward. The polarity of these terms signifies excitatory or inhibitory interactions. Shunting network equations can be derived from various sources, such as the passive membrane equation with synaptic interaction (Grossberg 1973, Pinter 1983), models of dendritic interaction (Rall 1977), or experiments on motoneurons (Ellias and Grossberg 1975).
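
For concreteness, a minimal sketch of the feedforward case using the standard shunting form dx/dt = -Ax + (B - x)E - (x + D)I from Grossberg (1973); the gains and inputs below are illustrative, and in the recurrent case E and I would also include feedback terms f(x_j) from other units:

```python
import numpy as np

def shunting_step(x, E, I, A=1.0, B=1.0, D=1.0, dt=0.01):
    """One Euler step of the shunting equation
        dx/dt = -A*x + (B - x)*E - (x + D)*I
    Excitation drives x toward the upper bound B, inhibition toward -D,
    and both act multiplicatively ("shunting") on the current activity."""
    return x + dt * (-A * x + (B - x) * E - (x + D) * I)

# Feedforward case: E and I are fixed external inputs.
x = np.zeros(3)
E = np.array([2.0, 1.0, 0.0])  # excitatory inputs
I = np.array([0.0, 1.0, 2.0])  # inhibitory inputs
for _ in range(2000):
    x = shunting_step(x, E, I)
print(x)  # equilibrium: x = (B*E - D*I) / (A + E + I), bounded in [-D, B]
```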


Heterogeneous Neural Networks for Adaptive Behavior in Dynamic Environments

Neural Information Processing Systems

This heterogeneity is crucial to the flexible generation of behavior which is essential for survival in a complex, dynamic environment. It may also provide powerful insights into the design of artificial neural networks. In this paper, we describe a heterogeneous neural network for controlling the walking of a simulated insect. This controller is inspired by the neuroethological and neurobiological literature on insect locomotion. It exhibits a variety of statically stable gaits at different speeds simply by varying the tonic activity of a single cell. It can also adapt to perturbations as a natural consequence of its design.

INTRODUCTION
Even very simple animals exhibit a dazzling variety of complex behaviors which they continuously adapt to the changing circumstances of their environment. Nervous systems evolved in order to generate appropriate behavior in dynamic, uncertain situations and thus ensure the survival of the organisms containing them.
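
The abstract's claim that one cell's tonic activity can select the gait can be illustrated with a toy coordination model (a cartoon in the spirit of classic insect-gait models, not the authors' neural controller; all parameters are invented for illustration):

```python
import numpy as np

def leg_phases(command):
    """Toy hexapod coordination: one tonic 'command' level in (0, 1] sets
    both stepping frequency and, because inter-leg lags are fixed in time,
    the lag as a fraction of the step cycle. Low command gives rear-to-front
    metachronal waves; command near 1 gives the alternating tripod."""
    lag = 0.5 * command                    # ipsilateral lag, in cycles
    left = np.array([0.0, lag, 2 * lag])   # L1 (front), L2, L3 (rear)
    right = (left + 0.5) % 1.0             # right side in antiphase
    return np.concatenate([left, right]) % 1.0

def swinging(command, t):
    """True for each leg (L1, L2, L3, R1, R2, R3) that is in swing at time t."""
    stance = 1.0 - 0.5 * command           # higher command -> shorter stance
    phase = (command * t + leg_phases(command)) % 1.0
    return phase >= stance

print(swinging(1.0, 0.0).astype(int))  # [0 1 0 1 0 1]: tripod L2, R1, R3
print(swinging(1.0, 0.5).astype(int))  # [1 0 1 0 1 0]: tripod L1, L3, R2
```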


Performance of Synthetic Neural Network Classification of Noisy Radar Signals

Neural Information Processing Systems

This study evaluates the performance of the multilayer perceptron and the frequency-sensitive competitive learning network in identifying five commercial aircraft from radar backscatter measurements. The performance of the neural network classifiers is compared with that of the nearest-neighbor and maximum-likelihood classifiers. Our results indicate that, for this problem, the neural network classifiers are relatively insensitive to changes in the network topology and to the noise level in the training data. While the traditional algorithms outperform these simple neural classifiers on this problem, we feel that neural networks show the potential for improved performance.
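
For readers unfamiliar with it, frequency-sensitive competitive learning scales each unit's distance to an input by the unit's win count, so that rarely used units eventually capture part of the data and all codewords are utilized. A minimal sketch of that standard form (the learning rate, unit count, and training schedule are assumptions, not the paper's settings):

```python
import numpy as np

def fscl_train(data, n_units=8, lr=0.05, epochs=10, seed=0):
    """Frequency-sensitive competitive learning: the winning unit is the
    one minimizing win_count * distance, and only the winner is updated."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].copy()
    wins = np.ones(n_units)
    for _ in range(epochs):
        for x in data:
            d = wins * np.linalg.norm(w - x, axis=1)  # frequency-scaled distance
            k = int(np.argmin(d))
            w[k] += lr * (x - w[k])                   # move winner toward x
            wins[k] += 1
    return w

# Usage: quantize 200 random 2-D points into 8 codewords.
codebook = fscl_train(np.random.default_rng(1).random((200, 2)))
print(codebook.shape)  # (8, 2)
```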


Efficient Parallel Learning Algorithms for Neural Networks

Neural Information Processing Systems

Parallelizable optimization techniques are applied to the problem of learning in feedforward neural networks. In addition to having superior convergence properties, optimization techniques such as the Polak-Ribiere method are also significantly more efficient than the backpropagation algorithm. These results are based on experiments performed on small Boolean learning problems and the noisy real-valued learning problem of handwritten character recognition.

1 INTRODUCTION
The problem of learning in feedforward neural networks has received a great deal of attention recently because of the ability of these networks to represent seemingly complex mappings in an efficient parallel architecture. This learning problem can be characterized as an optimization problem, but it is unique in several respects. Function evaluation is very expensive. However, because the underlying network is parallel in nature, this evaluation is easily parallelizable.
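
The Polak-Ribiere method referred to above is a conjugate-gradient scheme in which successive search directions are built from gradient differences. A minimal sketch on a generic differentiable loss (the backtracking line search and the toy quadratic are placeholders for the authors' setup, where the gradient would come from backpropagation):

```python
import numpy as np

def polak_ribiere(f, grad, w, iters=200, tol=1e-8):
    """Conjugate-gradient minimization with the Polak-Ribiere update:
        beta = g1.(g1 - g0) / (g0.g0),   d <- -g1 + beta * d."""
    g = grad(w)
    d = -g
    for _ in range(iters):
        # Armijo-style backtracking line search along the direction d.
        t, fw = 1.0, f(w)
        while f(w + t * d) > fw + 1e-4 * t * np.dot(g, d) and t > 1e-12:
            t *= 0.5
        w = w + t * d
        g_new = grad(w)
        if np.dot(g_new, g_new) < tol:
            break
        beta = max(0.0, np.dot(g_new, g_new - g) / np.dot(g, g))  # PR+ restart
        d = -g_new + beta * d
        g = g_new
    return w

# Toy "learning problem": minimize the quadratic |A w - b|^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda w: float(np.sum((A @ w - b) ** 2))
grad = lambda w: 2.0 * A.T @ (A @ w - b)
print(polak_ribiere(f, grad, np.zeros(2)))  # approaches [0.6, -0.8]
```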


Theory of Self-Organization of Cortical Maps

Neural Information Processing Systems

We have shown mathematically that cortical maps in the primary sensory cortices can be reproduced using three hypotheses which have a physiological basis and meaning. Here, our main focus is on ocular dominance.


A Low-Power CMOS Circuit Which Emulates Temporal Electrical Properties of Neurons

Neural Information Processing Systems

Popular neuron models are based upon some statistical measure of known natural behavior. Whether that measure is expressed in terms of average firing rate or a firing probability, the instantaneous neuron activation is only represented in an abstract sense. Artificial electronic neurons derived from these models represent this excitation level as a binary code or a continuous voltage at the output of a summing amplifier. While such models have been shown to perform well for many applications, and form an integral part of much current work, they only partially emulate the manner in which natural neural networks operate. They ignore, for example, differences in relative arrival times of neighboring action potentials -- an important characteristic known to exist in natural auditory and visual networks (Sejnowski, 1986). They are also less adaptable to fine-grained, neuron-centered learning, like the post-tetanic facilitation observed in natural neurons. We are investigating the implementation and application of neuron circuits which better approximate natural neuron function.
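
A toy software counterpart of the point being made: a leaky integrator responds differently to the same pair of input spikes depending on their relative arrival time, something a rate-based summing unit cannot express (the time constant, weight, and threshold below are illustrative, not the circuit's parameters):

```python
import numpy as np

def lif_response(spike_times, tau=5.0, threshold=1.5, weight=1.0,
                 t_end=30.0, dt=0.1):
    """Leaky integrator driven by input spikes: each spike adds `weight`
    to the membrane state v, which decays with time constant tau.
    Returns the time of the first output spike (v crossing threshold),
    or None. Near-coincident inputs summate; widely separated ones do not."""
    v = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        v *= np.exp(-dt / tau)                                  # membrane leak
        v += weight * sum(1 for s in spike_times if abs(s - t) < dt / 2)
        if v >= threshold:
            return t
    return None

print(lif_response([5.0, 5.5]))   # near-coincident spikes -> fires (~5.5)
print(lif_response([5.0, 20.0]))  # same two spikes, far apart -> None
```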