Industry
Performance of Connectionist Learning Algorithms on 2-D SIMD Processor Arrays
Nuñez, Fernando J., Fortes, José A. B.
The mapping of the back-propagation and mean field theory learning algorithms onto a generic 2-D SIMD computer is described. This architecture proves well suited to these applications, since efficiencies close to the optimum can be attained. Expressions for the learning rates are given and then particularized to the DAP array processor.
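To make the kind of mapping the abstract describes concrete, here is a minimal numpy sketch (the grid shape, layer sizes, and blocked layout are my assumptions, not the paper's actual scheme): one layer's weight matrix is distributed in blocks over a P x P processor grid, each processor multiplies its block locally, and partial sums are reduced along processor rows.

```python
import numpy as np

# Hypothetical illustration of a blocked matrix-vector product on a
# 2-D SIMD grid. Each of the P x P processors holds one block of W and
# multiplies it locally; the row-wise reduction is the only
# inter-processor communication in this sketch.

P = 4                            # processors per grid side (assumed)
n_in, n_out = 8, 8               # layer sizes, chosen divisible by P
W = np.random.randn(n_out, n_in)
x = np.random.randn(n_in)

bi, bj = n_out // P, n_in // P   # block dimensions per processor
partial = np.zeros((P, P, bi))
for p in range(P):               # simulate all processors; on real
    for q in range(P):           # hardware these run in lockstep
        Wb = W[p*bi:(p+1)*bi, q*bj:(q+1)*bj]
        partial[p, q] = Wb @ x[q*bj:(q+1)*bj]

y = partial.sum(axis=1).reshape(n_out)   # row-wise reduction
assert np.allclose(y, W @ x)             # matches the serial product
```

When the matrix dimensions divide evenly into the grid, every processor does the same amount of work and only the reduction costs communication, which is the intuition behind near-optimal efficiencies.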
A Reconfigurable Analog VLSI Neural Network Chip
Satyanarayana, Srinagesh, Tsividis, Yannis P., Graf, Hans Peter
The distributed-neuron synapses are arranged in blocks of 16, which we call '4 x 4 tiles'. Switch matrices are interleaved between each of these tiles to provide programmability of interconnections. With a small area overhead (15%), the 1024 units of the network can be rearranged in various configurations. Some of the possible configurations are a 12-32-12 network, a 16-12-12-16 network, two 12-32 networks, etc. (the numbers separated by dashes indicate the number of units per layer, including the input layer). Weights are stored in analog form on MOS capacitors.
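As a rough illustration of the bookkeeping such reconfigurability implies (the allocation rule below is my guess, not the chip's actual switch-matrix logic): 1024 synapses in 4 x 4 tiles gives 64 tiles, and a fully connected block between adjacent layers consumes whole tiles after rounding each dimension up to a multiple of 4.

```python
from math import ceil

# Hypothetical tile-allocation sketch. Tiles are granted in whole
# 4 x 4 blocks, so each inter-layer weight block is padded up to tile
# boundaries before counting.

TILE = 4
TOTAL_TILES = 1024 // (TILE * TILE)   # 64 tiles on chip

def tiles_needed(layers):
    """Tiles consumed by a fully connected layered configuration."""
    total = 0
    for a, b in zip(layers, layers[1:]):
        # an a x b weight block occupies ceil(a/4) x ceil(b/4) tiles
        total += ceil(a / TILE) * ceil(b / TILE)
    return total

for cfg in [(12, 32, 12), (16, 12, 12, 16)]:
    t = tiles_needed(cfg)
    print(cfg, "->", t, "tiles;", "fits" if t <= TOTAL_TILES else "too big")
```

Under this counting, a 12-32-12 network needs 48 of the 64 tiles and a 16-12-12-16 network needs 33, consistent with the abstract's claim that such configurations fit on one chip.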
A Self-organizing Associative Memory System for Control Applications
ABSTRACT The CMAC storage scheme has been used as a basis for a software implementation of an associative memory system (AMS). A major disadvantage of this CMAC concept is that the degree of local generalization (area of interpolation) is fixed. This paper deals with an algorithm for self-organizing variable generalization for the AMS, based on ideas of T. Kohonen.
1 INTRODUCTION For several years, research at the Department of Control Theory and Robotics at the Technical University of Darmstadt has been concerned with the design of a learning real-time control loop with neuron-like associative memories (LERNAS) for the control of unknown, nonlinear processes (Ersue, Tolle, 1988). This control concept uses an associative memory system (AMS), based on the cerebellar cortex model CMAC by Albus (Albus, 1972), for the storage of a predictive nonlinear process model and an appropriate nonlinear control strategy (Figure 1).
Figure 1: The learning control loop LERNAS
One problem in adjusting the control loop to a process, however, is finding a suitable set of parameters for the associative memory. The parameters in question determine the degree of generalization within the memory and therefore have a direct influence on the number of training steps required to learn the process behaviour. For good performance of the control loop it is desirable to have very small generalization around a given setpoint but large generalization elsewhere.
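For readers unfamiliar with the underlying scheme, here is a minimal 1-D CMAC sketch (my simplification of the Albus scheme; all sizes and the learning rate are my choices). The generalization parameter g is the number of overlapping tilings: inputs closer than g fine quantization steps share active cells and therefore interpolate. This fixed g is exactly the rigid degree of generalization that the paper's self-organizing variant replaces.

```python
import numpy as np

# Minimal 1-D CMAC: g overlapping tilings of width g cover a finely
# quantized input axis; each input activates one cell per tiling, and
# training spreads the error correction over the active cells.

class CMAC:
    def __init__(self, g=8, res=256, lr=0.5):
        self.g, self.res, self.lr = g, res, lr
        self.w = np.zeros((g, res // g + 2))

    def _active(self, x):
        q = int(x * self.res)                 # fine input quantization
        return [(t, (q + t) // self.g) for t in range(self.g)]

    def predict(self, x):
        return sum(self.w[t, c] for t, c in self._active(x))

    def train(self, x, target):
        err = target - self.predict(x)
        for t, c in self._active(x):
            self.w[t, c] += self.lr * err / self.g  # spread the correction

net = CMAC()
for _ in range(2000):
    x = np.random.rand()
    net.train(x, np.sin(2 * np.pi * x))
print(round(net.predict(0.25), 3))   # approaches sin(pi/2) = 1.0
```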
Contour-Map Encoding of Shape for Early Vision
Pentti Kanerva, Research Institute for Advanced Computer Science, Mail Stop 230-5, NASA Ames Research Center, Moffett Field, California 94035
ABSTRACT Contour maps provide a general method for recognizing two-dimensional shapes. All but blank images give rise to such maps, and people are good at recognizing objects and shapes from them. The maps are encoded easily in long feature vectors that are suitable for recognition by an associative memory. These properties of contour maps suggest a role for them in early visual perception. The prevalence of direction-sensitive neurons in the visual cortex of mammals supports this view.
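One way such a long feature vector might be built (my construction, not Kanerva's exact encoding): treat the image as a height field whose level curves form the contour map, estimate the contour direction at each point of a coarse grid from the gray-level gradient (contours run perpendicular to the gradient), quantize direction into a few bins, and concatenate the one-hot codes into one long, sparse binary vector for an associative memory.

```python
import numpy as np

# Sketch of a contour-direction feature vector. Blank regions (no
# gradient) contribute an all-zero code, matching the observation that
# only non-blank images give rise to contour maps.

def contour_code(img, step=4, bins=8):
    gy, gx = np.gradient(img.astype(float))
    theta = (np.arctan2(gy, gx) + np.pi / 2) % np.pi   # contour direction
    code = []
    for i in range(0, img.shape[0], step):
        for j in range(0, img.shape[1], step):
            one_hot = np.zeros(bins, dtype=np.uint8)
            if np.hypot(gx[i, j], gy[i, j]) > 1e-6:    # skip blank areas
                one_hot[int(theta[i, j] / np.pi * bins) % bins] = 1
            code.append(one_hot)
    return np.concatenate(code)      # long, sparse feature vector

img = np.fromfunction(lambda i, j: (i - 16) ** 2 + (j - 16) ** 2, (32, 32))
v = contour_code(img)
print(v.shape, int(v.sum()))         # e.g. (512,) with sparse 1s
```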
Neural Networks: The Early Days
A short account is given of various investigations of neural network properties, beginning with the classic work of McCulloch & Pitts. Early work on neurodynamics and statistical mechanics, analogies with magnetic materials, fault tolerance via parallel distributed processing, memory, learning, and pattern recognition is described.
Sigma-Pi Learning: On Radial Basis Functions and Cortical Associative Learning
Mel, Bartlett W., Koch, Christof
The goal in this work has been to identify the neuronal elements of the cortical column that are most likely to support the learning of nonlinear associative maps. We show that a particular style of network learning algorithm based on locally-tuned receptive fields maps naturally onto cortical hardware, and gives coherence to a variety of features of cortical anatomy, physiology, and biophysics whose relations to learning remain poorly understood.
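As a point of reference for the style of learning the abstract refers to, here is a generic network of locally-tuned receptive fields (a standard RBF network; the parameter choices are mine, and the paper's biophysical mapping is far richer). Each hidden unit's Gaussian response is written sigma-pi style as a product of per-dimension tuning curves, and only the linear output weights are trained.

```python
import numpy as np

# Generic locally-tuned receptive-field (RBF) network sketch. The
# product over input dimensions is the multiplicative (pi) part; the
# weighted sum at the output is the additive (sigma) part.

rng = np.random.default_rng(0)
centers = rng.uniform(-1, 1, size=(20, 2))   # receptive field centers
width = 0.4

def hidden(x):
    # per-dimension Gaussian tuning, combined by a product
    per_dim = np.exp(-((x - centers) ** 2) / (2 * width ** 2))
    return per_dim.prod(axis=1)              # 20 local activations

w = np.zeros(20)
for _ in range(3000):                        # LMS on output weights only
    x = rng.uniform(-1, 1, 2)
    target = np.sin(np.pi * x[0]) * np.cos(np.pi * x[1])
    h = hidden(x)
    w += 0.1 * (target - w @ h) * h

x = np.array([0.5, 0.0])
print(round(float(w @ hidden(x)), 2), "vs", round(np.sin(np.pi * 0.5), 2))
```

Because each unit responds only near its center, learning is local: an error at one point adjusts only the few units whose receptive fields cover it, which is the property that makes the cortical mapping plausible.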
Neural Network Simulation of Somatosensory Representational Plasticity
Grajski, Kamil A., Merzenich, Michael
The brain represents the skin surface as a topographic map in the somatosensory cortex. This map has been shown experimentally to be modifiable in a use-dependent fashion throughout life. We present a neural network simulation of the competitive dynamics underlying this cortical plasticity, analyzing in detail the receptive field properties of model neurons during simulations of skin coactivation, cortical lesion, digit amputation, and nerve section.
1 INTRODUCTION Plasticity of adult somatosensory cortical maps has been demonstrated experimentally in a variety of maps and species (Kaas et al., 1983; Wall, 1988). This report focuses on modelling primary somatosensory cortical plasticity in the adult monkey. We model the long-term consequences of four specific experiments, taken in pairs. With the first pair, behaviorally controlled stimulation of restricted skin surfaces (Jenkins et al., 1990) and induced cortical lesions (Jenkins and Merzenich, 1987), we demonstrate that Hebbian-type dynamics is sufficient to account for the inverse relationship between cortical magnification (area of cortical map representing a unit area of skin) and receptive field size (skin surface which, when stimulated, excites a cortical unit) (Sur et al., 1980; Grajski and Merzenich, 1990). These results are obtained with several variations of the basic model. We conclude that relying solely on cortical magnification and receptive field size will not disambiguate the contributions of each of the myriad circuits known to occur in the brain. With the second pair, digit amputation (Merzenich et al., 1984) and peripheral nerve cut without regeneration (Merzenich et al., 1983), we explore the role of local excitatory connections in the model.
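A toy 1-D version of the Hebbian dynamics at issue (my reduction, far simpler than the paper's model): skin sites drive cortical units through weights W; repeatedly coactivating a restricted skin patch, with Hebbian updates plus weight normalization and a crude winner-take-most competition, expands that patch's cortical representation.

```python
import numpy as np

# Toy skin-to-cortex map. Coactivation of skin sites 10-15 plus
# Hebbian reinforcement of the competitive winners shifts more
# cortical units' preferred sites into the trained patch.

rng = np.random.default_rng(1)
n_skin, n_ctx = 30, 30
W = rng.random((n_ctx, n_skin))
W /= W.sum(axis=1, keepdims=True)            # normalized input weights

def stimulate(center, width=2):
    s = np.exp(-0.5 * ((np.arange(n_skin) - center) / width) ** 2)
    c = W @ s
    c = c * (c >= np.sort(c)[-5])            # crude competition: top 5 win
    return s, c

for _ in range(500):                         # coactivate skin sites 10-15
    s, c = stimulate(rng.integers(10, 16))
    W += 0.05 * np.outer(c, s)               # Hebbian coactivation term
    W /= W.sum(axis=1, keepdims=True)        # keep total synaptic weight fixed

best = W.argmax(axis=1)                      # preferred skin site per unit
print("units tuned to trained patch:", int(np.sum((best >= 10) & (best <= 15))))
```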
Time Dependent Adaptive Neural Networks
Fernando J. Pineda, Center for Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109
ABSTRACT A comparison of algorithms that minimize error functions to train the trajectories of recurrent networks reveals how complexity is traded off for causality. These algorithms are also related to time-independent formalisms. It is suggested that causal and scalable algorithms are possible when the activation dynamics of adaptive neurons is fast compared to the behavior to be learned. Standard continuous-time recurrent backpropagation is used in an example.
1 INTRODUCTION Training the time-dependent behavior of a neural network model involves the minimization of a function that measures the difference between an actual trajectory and a desired trajectory. The standard method of accomplishing this minimization is to calculate the gradient of an error function with respect to the weights of the system and then to use the gradient in a minimization algorithm (e.g. gradient descent).
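A bare-bones illustration of the minimization the introduction describes (my construction; the algorithms the paper compares compute this gradient far more cleverly than the brute-force estimate used here): integrate a small recurrent network forward in time, score its output trajectory against a target, and descend a finite-difference estimate of the gradient of that trajectory error with respect to the weights.

```python
import numpy as np

# Trajectory error minimization for a tiny continuous-time recurrent
# network, integrated by forward Euler. The finite-difference gradient
# is only meant to show the structure of the optimization, not to be
# efficient or causal.

rng = np.random.default_rng(2)
T, dt, n = 100, 0.05, 3
target = np.sin(np.linspace(0, 2 * np.pi, T))   # desired output trajectory

def trajectory_error(W):
    y = np.zeros(n)
    err = 0.0
    for t in range(T):
        y = y + dt * (-y + np.tanh(W @ y) + np.array([1.0, 0, 0]))  # drive unit 0
        err += (y[-1] - target[t]) ** 2                 # read out unit n-1
    return err / T

W = 0.1 * rng.standard_normal((n, n))
for step in range(200):                     # finite-difference descent
    grad = np.zeros_like(W)
    e0 = trajectory_error(W)
    for idx in np.ndindex(W.shape):
        Wp = W.copy(); Wp[idx] += 1e-4
        grad[idx] = (trajectory_error(Wp) - e0) / 1e-4
    W -= 0.2 * grad
print("final trajectory error:", round(trajectory_error(W), 4))
```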