

A Self-organizing Associative Memory System for Control Applications

Neural Information Processing Systems

ABSTRACT The CMAC storage scheme has been used as a basis for a software implementation of an associative memory AMS. A major disadvantage of this CMAC concept is that the degree of local generalization (area of interpolation) is fixed. This paper deals with an algorithm for self-organizing variable generalization for the AMS, based on ideas of T. Kohonen.

1 INTRODUCTION For several years, research at the Department of Control Theory and Robotics at the Technical University of Darmstadt has been concerned with the design of a learning real-time control loop with neuron-like associative memories (LERNAS) for the control of unknown, nonlinear processes (Ersue, Tolle, 1988). This control concept uses an associative memory system AMS, based on the cerebellar cortex model CMAC by Albus (Albus, 1972), for the storage of a predictive nonlinear process model and an appropriate nonlinear control strategy (Figure 1).

Figure 1: The learning control loop LERNAS

One problem in adjusting the control loop to a process, however, is to find a suitable set of parameters for the associative memory. The parameters in question determine the degree of generalization within the memory and therefore have a direct influence on the number of training steps required to learn the process behaviour. For good performance of the control loop it is desirable to have a very small generalization around a given setpoint but a large generalization elsewhere.
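The storage scheme itself is not reproduced in this excerpt, so the following is only a minimal sketch of an Albus-style CMAC with the fixed generalization the paper sets out to overcome: several offset tilings quantize the input, and inputs falling within roughly one tile width of each other share weights. All names (`n_tilings`, `tile_width`, the hashing constant) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class CMAC:
    """Minimal sketch of an Albus-style CMAC with FIXED generalization."""

    def __init__(self, n_tilings=8, tile_width=0.5, n_weights=4096, lr=0.1):
        self.n_tilings = n_tilings
        self.tile_width = tile_width  # fixed area of interpolation, everywhere
        self.offsets = np.linspace(0.0, tile_width, n_tilings, endpoint=False)
        self.weights = np.zeros(n_weights)
        self.lr = lr

    def _active_cells(self, x):
        # Each tiling quantizes x with a different offset; nearby inputs
        # activate overlapping cell sets, which produces local interpolation.
        idx = np.floor((x + self.offsets) / self.tile_width).astype(int)
        return (idx * 2654435761 + np.arange(self.n_tilings)) % len(self.weights)

    def predict(self, x):
        return self.weights[self._active_cells(x)].sum()

    def train(self, x, target):
        cells = self._active_cells(x)
        error = target - self.weights[cells].sum()
        self.weights[cells] += self.lr * error / self.n_tilings

# usage: learn sin(x); inputs closer than ~tile_width share weights
cmac = CMAC()
for _ in range(2000):
    x = np.random.uniform(0.0, 2.0 * np.pi)
    cmac.train(x, np.sin(x))
print(cmac.predict(1.0), np.sin(1.0))
```

In this sketch the interpolation area is the same everywhere in input space, which is exactly the limitation that self-organizing variable generalization addresses.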


Contour-Map Encoding of Shape for Early Vision

Neural Information Processing Systems

Pentti Kanerva, Research Institute for Advanced Computer Science, Mail Stop 230-5, NASA Ames Research Center, Moffett Field, California 94035

ABSTRACT Contour maps provide a general method for recognizing two-dimensional shapes. All but blank images give rise to such maps, and people are good at recognizing objects and shapes from them. The maps are encoded easily in long feature vectors that are suitable for recognition by an associative memory. These properties of contour maps suggest a role for them in early visual perception. The prevalence of direction-sensitive neurons in the visual cortex of mammals supports this view.
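The paper's exact encoding is not given in this excerpt; the sketch below shows one plausible way to turn contour directions into a long feature vector of the kind an associative memory could store, by quantizing gradient directions per grid cell. The parameters `n_dirs` and `grid` and the thresholding rule are assumptions for illustration only.

```python
import numpy as np

def contour_feature_vector(img, n_dirs=8, grid=8):
    gy, gx = np.gradient(img.astype(float))     # image gradients
    mag = np.hypot(gx, gy)                      # contour strength
    ang = np.arctan2(gy, gx)                    # contour direction, in (-pi, pi]
    h, w = img.shape
    vec = np.zeros((grid, grid, n_dirs), dtype=np.uint8)
    for i in range(grid):
        for j in range(grid):
            m = mag[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            a = ang[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            strong = m > m.mean() + m.std()     # keep only salient contour pixels
            if strong.any():
                bins = ((a[strong] + np.pi) / (2 * np.pi) * n_dirs).astype(int) % n_dirs
                vec[i, j, np.unique(bins)] = 1  # directions present in this cell
    return vec.ravel()                          # long binary feature vector

v = contour_feature_vector(np.random.rand(64, 64))
print(v.shape, int(v.sum()))
```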


Neural Network Weight Matrix Synthesis Using Optimal Control Techniques

Neural Information Processing Systems

Given a set of input-output training samples, we describe a procedure for determining the time sequence of weights for a dynamic neural network to model an arbitrary input-output process. We formulate the input-output mapping problem as an optimal control problem, defining a performance index to be minimized as a function of the time-varying weights.
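The performance index itself is not stated in this excerpt; one hedged possibility, written in the standard optimal-control form with a terminal output error plus a running cost on the time-varying weights, is sketched below. The dynamics, the output map, and the symbols \rho, C, and \sigma are all assumptions for illustration.

```latex
% A sketch only; the paper's actual functional is not given above.
% x(t): network state, W(t): time-varying weights, u(t): input,
% d: desired output, \sigma: unit nonlinearity, \rho, C: assumed.
\[
  \min_{W(\cdot)} \; J \;=\;
  \tfrac{1}{2}\,\bigl\lVert y(t_f) - d \bigr\rVert^{2}
  \;+\; \frac{\rho}{2} \int_{t_0}^{t_f} \bigl\lVert W(t) \bigr\rVert_F^{2}\, dt
\]
\[
  \text{subject to}\quad
  \dot{x}(t) = -x(t) + W(t)\,\sigma\bigl(x(t)\bigr) + u(t),
  \qquad y(t) = C\,x(t).
\]
```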


Maximum Likelihood Competitive Learning

Neural Information Processing Systems

One popular class of unsupervised algorithms is the class of competitive algorithms. In the traditional view of competition, only one competitor, the winner, adapts for any given case. I propose to view competitive adaptation as attempting to fit a blend of simple probability generators (such as gaussians) to a set of data points. The maximum likelihood fit of a model of this type suggests a "softer" form of competition, in which all competitors adapt in proportion to the relative probability that the input came from each competitor. I investigate one application of the soft competitive model, placement of radial basis function centers for function interpolation, and show that the soft model can give better performance with little additional computational cost.

1 INTRODUCTION Interest in unsupervised learning has increased recently due to the application of more sophisticated mathematical tools (Linsker, 1988; Plumbley and Fallside, 1988; Sanger, 1989) and the success of several elegant simulations of large-scale self-organization (Linsker, 1986; Kohonen, 1982). One popular class of unsupervised algorithms is the class of competitive algorithms, which have appeared as components in a variety of systems (Von der Malsburg, 1973; Fukushima, 1975; Grossberg, 1978). Generalizing the definition of Rumelhart and Zipser (1986), a competitive adaptive system consists of a collection of modules which are structurally identical except, possibly, for random initial parameter variation.
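The abstract fixes the flavor of the update: every competitor adapts in proportion to the relative probability that it generated the input. A minimal sketch with isotropic gaussian generators follows; the parameter names and the fixed shared variance are illustrative assumptions, not the paper's.

```python
import numpy as np

def soft_competitive_step(x, centers, sigma=0.5, lr=0.05):
    # relative probability that each isotropic gaussian generated x;
    # shifting d2 by its minimum stabilizes exp without changing ratios
    d2 = ((centers - x) ** 2).sum(axis=1)
    p = np.exp(-(d2 - d2.min()) / (2.0 * sigma ** 2))
    r = p / p.sum()                          # responsibilities sum to 1
    # every competitor adapts, weighted by its responsibility
    centers += lr * r[:, None] * (x - centers)
    return centers

rng = np.random.default_rng(0)
centers = rng.uniform(-1.0, 1.0, size=(4, 2))
for x in rng.normal(size=(500, 2)):
    centers = soft_competitive_step(x, centers)
print(centers)
```

In the limit of vanishing variance the responsibilities concentrate on the nearest center and the update reduces to the traditional winner-take-all rule.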


Neural Networks: The Early Days

Neural Information Processing Systems

A short account is given of various investigations of neural network properties, beginning with the classic work of McCulloch & Pitts. Early work on neurodynamics and statistical mechanics, analogies with magnetic materials, fault tolerance via parallel distributed processing, memory, learning, and pattern recognition, is described.


Subgrouping Reduces Complexity and Speeds Up Learning in Recurrent Networks

Neural Information Processing Systems

Recurrent nets are more powerful than feedforward nets because they allow simulation of dynamical systems. Everything from sine wave generators through computers to the brain is a potential candidate, but to use recurrent nets to emulate dynamical systems we need learning algorithms to program them. Here I describe a new twist on an old algorithm for recurrent nets and compare it to its predecessors.
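The subgrouping scheme itself is not spelled out in this excerpt. Assuming the "old algorithm" is real-time recurrent learning (RTRL) and that each subgroup runs it over its own units while treating the other subgroups' activities as external inputs, back-of-envelope arithmetic like the following illustrates where the savings come from; the cost model is my own simplification, not the paper's derivation.

```python
# Full RTRL on n units stores a sensitivity d a_k / d w_ij for every
# (unit, weight) pair: ~n^3 numbers, updated at ~n^4 cost per step.
# With g subgroups of m = n/g units each, a subgroup keeps sensitivities
# only for its own m units and its own m*n incoming weights.
def rtrl_costs(n, g):
    m = n // g                                # units per subgroup (assumes g divides n)
    weights_per_group = m * n                 # each unit still sees all n activities
    storage = g * m * weights_per_group       # total sensitivities: n**3 / g
    update = g * m * weights_per_group * m    # per-step work: n**4 / g**2
    return storage, update

for g in (1, 2, 4, 8):
    print(g, rtrl_costs(64, g))
```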


Sigma-Pi Learning: On Radial Basis Functions and Cortical Associative Learning

Neural Information Processing Systems

The goal in this work has been to identify the neuronal elements of the cortical column that are most likely to support the learning of nonlinear associative maps. We show that a particular style of network learning algorithm based on locally-tuned receptive fields maps naturally onto cortical hardware, and gives coherence to a variety of features of cortical anatomy, physiology, and biophysics whose relations to learning remain poorly understood.
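The mapping onto cortical hardware is only summarized above; the sketch below shows the generic link between sigma-pi units and locally-tuned receptive fields that the abstract alludes to: a product (the "pi") of one-dimensional local tunings yields a multidimensional bump, and a weighted sum (the "sigma") over such units gives an RBF-style associative map. All parameter names and shapes are illustrative assumptions.

```python
import numpy as np

def sigma_pi_rbf(x, centers, widths, weights):
    # per-dimension local tunings; the product across dimensions (the
    # "pi" stage) yields a multidimensional locally-tuned bump
    factors = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))
    activations = factors.prod(axis=1)
    # weighted sum over units (the "sigma" stage) forms the map output
    return weights @ activations

rng = np.random.default_rng(1)
centers = rng.uniform(0.0, 1.0, size=(10, 3))   # 10 units tuned in a 3-D input space
widths = np.full((10, 3), 0.2)
weights = rng.normal(size=10)
x = rng.uniform(0.0, 1.0, size=3)
print(sigma_pi_rbf(x, centers, widths, weights))
```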


Asymptotic Convergence of Backpropagation: Numerical Experiments

Neural Information Processing Systems

We have calculated, both analytically and in simulations, the rate of convergence at long times in the backpropagation learning algorithm for networks with and without hidden units. Our basic finding for units using the standard sigmoid transfer function is 1/t convergence of the error for large t, with at most logarithmic corrections for networks with hidden units. Other transfer functions may lead to a slower polynomial rate of convergence. Our analytic calculations were presented in (Tesauro, He & Ahmad, 1989). Here we focus in more detail on our empirical measurements of the convergence rate in numerical simulations, which confirm our analytic results.
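In symbols, the claimed asymptotics can be summarized as below; the constants and the exponents p and \gamma are placeholders of mine, since the abstract gives only the qualitative statement.

```latex
% Hedged restatement of the claims above; c, p, and \gamma are
% placeholder symbols, not values from the paper.
\[
  E(t) \;\sim\; \frac{c}{t}
  \qquad (t \to \infty,\ \text{sigmoid units, no hidden layer})
\]
\[
  E(t) \;\sim\; \frac{c\,(\log t)^{p}}{t},\quad p \ge 0
  \qquad (\text{hidden units: at most logarithmic corrections})
\]
\[
  E(t) \;\sim\; c\, t^{-\gamma},\quad 0 < \gamma < 1
  \qquad (\text{other transfer functions: possibly slower polynomial rates})
\]
```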


Neural Network Simulation of Somatosensory Representational Plasticity

Neural Information Processing Systems

The brain represents the skin surface as a topographic map in the somatosensory cortex. This map has been shown experimentally to be modifiable in a use-dependent fashion throughout life. We present a neural network simulation of the competitive dynamics underlying this cortical plasticity by detailed analysis of receptive field properties of model neurons during simulations of skin coactivation, cortical lesion, digit amputation and nerve section.

1 INTRODUCTION Plasticity of adult somatosensory cortical maps has been demonstrated experimentally in a variety of maps and species (Kaas, et al., 1983; Wall, 1988). This report focuses on modelling primary somatosensory cortical plasticity in the adult monkey. We model the long-term consequences of four specific experiments, taken in pairs. With the first pair, behaviorally controlled stimulation of restricted skin surfaces (Jenkins, et al., 1990) and induced cortical lesions (Jenkins and Merzenich, 1987), we demonstrate that Hebbian-type dynamics is sufficient to account for the inverse relationship between cortical magnification (area of cortical map representing a unit area of skin) and receptive field size (skin surface which when stimulated excites a cortical unit) (Sur, et al., 1980; Grajski and Merzenich, 1990). These results are obtained with several variations of the basic model. We conclude that relying solely on cortical magnification and receptive field size will not disambiguate the contributions of each of the myriad circuits known to occur in the brain. With the second pair, digit amputation (Merzenich, et al., 1984) and peripheral nerve cut (without regeneration) (Merzenich, et al., 1983), we explore the role of local excitatory connections in the model.
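The full model is not reproduced in this excerpt; the following is only a minimal sketch of the kind of Hebbian-type dynamics the abstract credits with map reorganization, with row normalization standing in for the competitive mechanisms of the real model. The array shapes, learning rate, and stimulation protocol are illustrative assumptions.

```python
import numpy as np

def hebbian_step(W, s, lr=0.05):
    # W[c, k]: synapse from skin site k to cortical unit c; s: skin activity
    c = W @ s                              # cortical response (linear, for brevity)
    W += lr * np.outer(c, s)               # Hebb: co-activity strengthens synapses
    W /= W.sum(axis=1, keepdims=True)      # normalization bounds total weight
    return W

rng = np.random.default_rng(0)
n_cortex, n_skin = 20, 40
W = rng.uniform(size=(n_cortex, n_skin))
W /= W.sum(axis=1, keepdims=True)

# coactivation protocol: scattered single touches, plus frequent joint
# stimulation of a restricted patch of skin (sites 15..19)
for t in range(2000):
    s = np.zeros(n_skin)
    if t % 2 == 0:
        s[15:20] = 1.0                     # costimulated patch
    else:
        s[rng.integers(n_skin)] = 1.0      # a random single touch
    W = hebbian_step(W, s)

# map readout: each unit's best skin site; the patch should now claim
# a larger share of units (increased cortical magnification)
print(np.bincount(np.argmax(W, axis=1), minlength=n_skin))
```

Under repeated coactivation of a restricted skin patch, more model units come to respond best to that patch, the qualitative magnification effect the first pair of experiments probes.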

