Technology
Neural Analog Diffusion-Enhancement Layer and Spatio-Temporal Grouping in Early Vision
Waxman, Allen M., Seibert, Michael, Cunningham, Robert K., Wu, Jian
A new class of neural network aimed at early visual processing is described; we call it a Neural Analog Diffusion-Enhancement Layer or "NADEL." The network consists of two levels which are coupled through feedforward and shunted feedback connections. The lower level is a two-dimensional diffusion map which accepts visual features as input, and spreads activity over larger scales as a function of time. The upper level is periodically fed the activity from the diffusion layer and locates local maxima in it (an extreme form of contrast enhancement) using a network of local comparators. These local maxima are fed back to the diffusion layer using an on-center/off-surround shunting anatomy. The maxima are also available as output of the network. The network dynamics serve to cluster features on multiple scales as a function of time, and can be used in a variety of early visual processing tasks such as: extraction of corners and high-curvature points along edge contours, line-end detection, gap filling in contours, generation of fixation points, perceptual grouping on multiple scales, correspondence and path impletion in long-range apparent motion, and building 2-D shape representations that are invariant to location, orientation, scale, and small deformation on the visual field.
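The abstract describes the NADEL loop but gives no equations, so the following toy sketch only caricatures it under invented assumptions: a 32x32 grid, a discrete Laplacian for the diffusion layer, a strict 8-neighbor comparison for the comparator layer, and arbitrary feedback gains. It illustrates the diffuse/enhance/feed-back cycle, not the authors' network.

```python
import numpy as np

def diffuse(a, rate=0.2, steps=5):
    """Spread activity on the lower layer with a discrete Laplacian."""
    for _ in range(steps):
        lap = (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
               np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)
        a = a + rate * lap
    return a

def local_maxima(a):
    """Upper layer of local comparators: mark cells above all 8 neighbors."""
    m = np.ones(a.shape, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                m &= a > np.roll(np.roll(a, dy, 0), dx, 1)
    return m

# Two feature points: as activity diffuses, their fields merge and a
# single maximum appears between them, grouping the pair at a larger scale.
act = np.zeros((32, 32))
act[10, 10] = act[10, 20] = 1.0

for t in range(6):
    act = diffuse(act)
    peaks = local_maxima(act)
    act += 0.5 * peaks * act      # on-center feedback reinforces maxima
    act -= 0.05 * (~peaks) * act  # off-surround shunting suppression
    print(t, np.argwhere(peaks))
```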
Models of Ocular Dominance Column Formation: Analytical and Computational Results
Miller, Kenneth D., Keller, Joseph B., Stryker, Michael P.
In the developing visual system in many mammalian species, there is initially a uniform, overlapping innervation of layer 4 of the visual cortex by inputs representing the two eyes. Subsequently, these inputs segregate into patches or stripes that are largely or exclusively innervated by inputs serving a single eye, known as ocular dominance patches. The ocular dominance patches are on a small scale compared to the map of the visual world, so that the initially continuous map becomes two interdigitated maps, one representing each eye. These patches, together with the layers of cortex above and below layer 4, whose responses are dominated by the eye innervating the corresponding layer 4 patch, are known as ocular dominance columns.
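The excerpt describes the phenomenon rather than the authors' equations. As a loose illustration of how Hebbian growth plus competition between two uncorrelated eyes can drive initially overlapping inputs toward patchy single-eye dominance, here is a one-dimensional toy; all parameters are invented, and this is a cartoon, not the Miller-Keller-Stryker model itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # 1-D strip of layer-4 cells
wL = rng.uniform(0.4, 0.6, N)            # left-eye synaptic strengths
wR = rng.uniform(0.4, 0.6, N)            # right-eye synaptic strengths

def smooth(x, k=2):
    """Short-range lateral interaction: neighboring cells share activity."""
    return np.convolve(x, np.ones(2 * k + 1) / (2 * k + 1), mode="same")

for step in range(2000):
    aL, aR = rng.normal(), rng.normal()  # within-eye activity is shared;
                                         # the two eyes are uncorrelated
    post = smooth(wL * aL + wR * aR)     # cortical response, lateral spread
    wL = np.clip(wL + 0.01 * post * aL, 0, None)   # Hebbian update
    wR = np.clip(wR + 0.01 * post * aR, 0, None)
    s = wL + wR + 1e-9                   # competition: total strength per
    wL, wR = wL / s, wR / s              # cell is conserved

print(np.round(wL - wR, 1))              # +1: left-eye patch, -1: right-eye
```

Because the lateral smoothing makes neighboring cells win with the same eye, the dominance values form spatial runs, a one-dimensional analog of interdigitated patches.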
Linear Learning: Landscapes and Algorithms
Baldi, Pierre
What follows extends some of our results of [1] on learning from examples in layered feed-forward networks of linear units. In particular, we examine what happens when the number of layers is large or when the connectivity between layers is local, and we investigate some of the properties of an autoassociative algorithm. Notation will be as in [1], where additional motivations and references can be found. It is usual to criticize linear networks because "linear functions do not compute" and because several layers can always be reduced to one by the proper multiplication of matrices. However, this is not the point of view adopted here.
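The point that several linear layers always collapse to one is easy to verify numerically; a minimal check (the layer sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(8, 16))      # first linear layer: 16 -> 8
W2 = rng.normal(size=(4, 8))       # second linear layer: 8 -> 4
x = rng.normal(size=16)

deep = W2 @ (W1 @ x)               # two-layer network
flat = (W2 @ W1) @ x               # the equivalent single layer
assert np.allclose(deep, flat)     # identical input-output map
```

What multiple layers change, presumably the point of view adopted in the paper, is not the set of computable functions but the error landscape over the factors (W1, W2) that a learning algorithm must navigate; constraints such as local connectivity further restrict which products W2 W1 are reachable at all.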
Storing Covariance by the Associative Long-Term Potentiation and Depression of Synaptic Strengths in the Hippocampus
Stanton, Patric K., Sejnowski, Terrence J.
We have tested this assumption in the hippocampus, a cortical structure of the brain that is involved in long-term memory. A brief, high-frequency activation of excitatory synapses in the hippocampus produces an increase in synaptic strength known as long-term potentiation, or LTP (Bliss and Lomo, 1973), that can last for many days. LTP is known to be Hebbian since it requires the simultaneous release of neurotransmitter from presynaptic terminals coupled with postsynaptic depolarization (Kelso et al., 1986; Malinow and Miller, 1986; Gustafsson et al., 1987). However, a mechanism for the persistent reduction of synaptic strength that could balance LTP has not yet been demonstrated. We studied the associative interactions between separate inputs onto the same dendritic trees of hippocampal pyramidal cells of field CA1, and found that a low-frequency input which, by itself, does not persistently change synaptic strength, can either increase (associative LTP) or decrease in strength (associative long-term depression or LTD) depending upon whether it is positively or negatively correlated in time with a second, high-frequency bursting input. LTP of synaptic strength is Hebbian, and LTD is anti-Hebbian since it is elicited by pairing presynaptic firing with postsynaptic hyperpolarization sufficient to block postsynaptic activity.
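The "storing covariance" of the title corresponds to the textbook covariance rule, in which a synapse potentiates when pre- and postsynaptic activities fluctuate together and depresses when they fluctuate oppositely. A minimal numerical illustration, with an invented learning rate and synthetic signals (the paper's evidence is physiological, not computational):

```python
import numpy as np

rng = np.random.default_rng(2)

def covariance_dw(pre, post, eta=0.1):
    """Covariance rule: potentiate when pre- and postsynaptic activity
    fluctuate together (LTP-like), depress when they fluctuate
    oppositely (LTD-like)."""
    return eta * np.mean((pre - pre.mean()) * (post - post.mean()))

post = rng.random(500)
pre_corr = post + 0.1 * rng.random(500)        # positively correlated input
pre_anti = 1.0 - post + 0.1 * rng.random(500)  # negatively correlated input
print(covariance_dw(pre_corr, post))   # > 0: net potentiation
print(covariance_dw(pre_anti, post))   # < 0: net depression
```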
Adaptive Neural Networks Using MOS Charge Storage
Schwartz, Daniel B., Howard, R. E., Hubbard, Wayne E.
However, to achieve the full power of a VLSI implementation of an adaptive algorithm, the learning operation must be built into the circuit. We have fabricated and tested a circuit ideal for this purpose by connecting a pair of capacitors with a CCD-like structure, allowing for variable-size weight changes as well as a weight-decay operation. A 2.5μ CMOS version achieves better than 10 bits of dynamic range in a 140μ…
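As a software cartoon of the storage scheme: a weight held as the charge difference on a capacitor pair, adjusted in quantized CCD-style packets and subject to leakage. All constants below are invented for illustration; the actual device is analog and its behavior is not reproduced here.

```python
class ChargeWeight:
    """A weight stored as the charge difference on a capacitor pair,
    adjusted in quantized CCD-style packet transfers."""
    def __init__(self, levels=1024):       # ~10 bits of dynamic range
        self.q = 0.0                       # normalized charge difference
        self.packet = 2.0 / levels         # smallest transferable packet
    def update(self, n_packets):
        """Variable-size weight change: move n_packets between capacitors."""
        self.q = max(-1.0, min(1.0, self.q + n_packets * self.packet))
    def decay(self, rate=0.001):
        """Weight decay: leakage relaxes the pair toward equal charge."""
        self.q *= 1.0 - rate

w = ChargeWeight()
w.update(37); w.decay(); w.update(-5)
print(w.q)
```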
Analog Implementation of Shunting Neural Networks
Nabet, Bahram, Darling, Robert B., Pinter, Robert B.
The first case shows recurrent activity, while the second case is non-recurrent, or feedforward. The polarity of these terms signifies excitatory or inhibitory interactions. Shunting network equations can be derived from various sources, such as the passive membrane equation with synaptic interaction (Grossberg 1973, Pinter 1983), models of dendritic interaction (Rall 1977), or experiments on motoneurons (Ellias and Grossberg 1975).
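The "terms" referred to belong to a shunting equation shown earlier in the paper. The standard feedforward form from Grossberg (1973) is dx/dt = -Ax + (B - x)I+ - (x + D)I-, whose multiplicative terms bound the activity between -D and B however strong the inputs. A minimal Euler-integration sketch, with invented parameter values:

```python
def shunting_step(x, exc, inh, A=1.0, B=1.0, D=0.5, dt=0.01):
    """One Euler step of dx/dt = -A*x + (B - x)*exc - (x + D)*inh.
    The multiplicative terms vanish at x = B and x = -D, so the
    activity is automatically bounded regardless of input strength."""
    return x + dt * (-A * x + (B - x) * exc - (x + D) * inh)

x = 0.0
for _ in range(2000):
    x = shunting_step(x, exc=5.0, inh=2.0)
print(x)   # converges to (B*exc - D*inh) / (A + exc + inh) = 0.5
```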
Heterogeneous Neural Networks for Adaptive Behavior in Dynamic Environments
Beer, Randall D., Chiel, Hillel J., Sterling, Leon S.
This heterogeneity is crucial to the flexible generation of behavior which is essential for survival in a complex, dynamic environment. It may also provide powerful insights into the design of artificial neural networks. In this paper, we describe a heterogeneous neural network for controlling the walking of a simulated insect. This controller is inspired by the neuroethological and neurobiological literature on insect locomotion. It exhibits a variety of statically stable gaits at different speeds simply by varying the tonic activity of a single cell. It can also adapt to perturbations as a natural consequence of its design.
INTRODUCTION
Even very simple animals exhibit a dazzling variety of complex behaviors which they continuously adapt to the changing circumstances of their environment. Nervous systems evolved in order to generate appropriate behavior in dynamic, uncertain situations and thus ensure the survival of the organisms containing them.
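As a cartoon of the single-command-cell idea (not the authors' neural controller, which uses pacemaker neurons and sensory feedback), six legs can be modeled as phase oscillators whose common frequency and stance fraction are set by one tonic command level; raising the command sweeps the footfall pattern from a slow wave gait toward an alternating tripod. All constants are invented.

```python
import numpy as np

def gait(command, steps=20):
    """Toy pattern generator: six legs (L1 L2 L3 R1 R2 R3) as phase
    oscillators. One tonic 'command' level sets both the stepping
    frequency and the stance (duty) fraction; '#' = foot down."""
    duty = 1.0 - 0.5 * command           # faster walking -> shorter stance
    lag = duty                           # neighbor phase lag tracks duty
    offsets = np.array([0, lag, 2 * lag,
                        0.5, 0.5 + lag, 0.5 + 2 * lag]) % 1.0
    for t in range(steps):
        phase = (0.1 * command * t + offsets) % 1.0
        print("".join("#" if p < duty else "." for p in phase))

gait(0.3)   # low command: wave gait, most feet on the ground
gait(1.0)   # high command: alternating tripod gait
```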
Automatic Local Annealing
This research involves a method for finding global maxima in constraint satisfaction networks. It is an annealing process but, unlike most others, requires no annealing schedule. Temperature is instead determined locally by units at each update, and thus all processing is done at the unit level. There are two major practical benefits to processing this way: 1) processing can continue in 'bad' areas of the network, while 'good' areas remain stable, and 2) processing continues in the 'bad' areas as long as the constraints remain poorly satisfied (i.e., it does not stop after some predetermined number of cycles). As a result, this method not only avoids the kludge of requiring an externally determined annealing schedule, but it also finds global maxima more quickly and consistently than externally scheduled systems (a comparison to the Boltzmann machine (Ackley et al., 1985) is made).
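The abstract does not spell out the local temperature rule, so the sketch below is only one plausible reading: each unit of a symmetric constraint network sets its own temperature from how poorly its local constraints are satisfied, and updates stochastically at that temperature. The tanh schedule and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
W = rng.normal(size=(n, n))
W = (W + W.T) / 2.0                  # symmetric constraint weights
np.fill_diagonal(W, 0.0)
s = rng.choice([-1.0, 1.0], n)       # unit states

for sweep in range(200):
    for i in rng.permutation(n):
        h = W[i] @ s                 # net constraint input to unit i
        # Local temperature: stay hot while unit i's constraints are
        # poorly satisfied, freeze as they become satisfied.
        T = max(1e-3, 1.0 - np.tanh(s[i] * h))
        z = np.clip(2.0 * h / T, -50.0, 50.0)
        p_on = 1.0 / (1.0 + np.exp(-z))   # Glauber update at temperature T
        s[i] = 1.0 if rng.random() < p_on else -1.0

print("final goodness:", 0.5 * s @ W @ s)
```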
Performance of Synthetic Neural Network Classification of Noisy Radar Signals
Ahalt, Stanley C., Garber, F. D., Jouny, I., Krishnamurthy, Ashok K.
This study evaluates the performance of the multilayer-perceptron and the frequency-sensitive competitive learning network in identifying five commercial aircraft from radar backscatter measurements. The performance of the neural network classifiers is compared with that of the nearest-neighbor and maximum-likelihood classifiers. Our results indicate that for this problem, the neural network classifiers are relatively insensitive to changes in the network topology, and to the noise level in the training data. While the traditional algorithms outperform these simple neural classifiers on this task, we feel that neural networks show the potential for improved performance.
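For reference, frequency-sensitive competitive learning modifies ordinary competitive learning by scaling each unit's distance by its win count, so that no unit monopolizes the data. A minimal sketch under invented parameters and synthetic data; the paper's radar features and exact update are not reproduced here.

```python
import numpy as np

def fscl(data, n_units=5, eta=0.05, epochs=10, seed=0):
    """Frequency-sensitive competitive learning: each unit's distance is
    scaled by its win count, so frequently winning units become harder
    to win with and all units end up representing part of the data."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].copy()
    counts = np.ones(n_units)
    for _ in range(epochs):
        for x in rng.permutation(data):
            d = counts * np.sum((w - x) ** 2, axis=1)  # scaled distances
            j = int(np.argmin(d))
            w[j] += eta * (x - w[j])     # move the winner toward the input
            counts[j] += 1.0
    return w, counts

data = np.random.default_rng(1).normal(size=(300, 2))
codebook, counts = fscl(data)
print(counts)   # every unit captures a share of the inputs
```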