Information Technology
Sigma-Pi Learning: On Radial Basis Functions and Cortical Associative Learning
Mel, Bartlett W., Koch, Christof
The goal in this work has been to identify the neuronal elements of the cortical column that are most likely to support the learning of nonlinear associative maps. We show that a particular style of network learning algorithm based on locally-tuned receptive fields maps naturally onto cortical hardware, and gives coherence to a variety of features of cortical anatomy, physiology, and biophysics whose relations to learning remain poorly understood.
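To make the "locally-tuned receptive field" style of learning concrete, here is a minimal radial-basis-function regression sketch in Python; the Gaussian units, centers, width, and least-squares output fit are illustrative choices for a generic RBF learner, not the authors' cortical model.

```python
# Minimal sketch: learning a nonlinear map with locally-tuned (Gaussian) units.
# Hypothetical example, not the authors' model; the centers, width, and the
# least-squares output fit are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)                     # input samples
y = np.sin(3.0 * x) + 0.05 * rng.standard_normal(200)    # nonlinear target map

centers = np.linspace(-1.0, 1.0, 15)                     # receptive-field centers
width = 0.2                                              # receptive-field width (sigma)

def activations(inputs):
    """Gaussian 'receptive field' response of each hidden unit."""
    return np.exp(-((inputs[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

H = activations(x)
w, *_ = np.linalg.lstsq(H, y, rcond=None)                # output weights by least squares

x_test = np.linspace(-1.0, 1.0, 5)
print(np.round(activations(x_test) @ w, 3))              # learned map at test points
print(np.round(np.sin(3.0 * x_test), 3))                 # true map for comparison
```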
Asymptotic Convergence of Backpropagation: Numerical Experiments
Ahmad, Subutai, Tesauro, Gerald, He, Yu
We have calculated, both analytically and in simulations, the rate of convergence at long times in the backpropagation learning algorithm for networks with and without hidden units. Our basic finding for units using the standard sigmoid transfer function is 1/t convergence of the error for large t, with at most logarithmic corrections for networks with hidden units. Other transfer functions may lead to a slower polynomial rate of convergence. Our analytic calculations were presented in (Tesauro, He & Ahmad, 1989). Here we focus in more detail on our empirical measurements of the convergence rate in numerical simulations, which confirm our analytic results.
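For concreteness, a small sketch of this kind of numerical measurement: train a single sigmoid unit by gradient descent and fit the slope of log-error against log-time, where a slope near -1 corresponds to 1/t convergence. The data, learning rate, and fit window below are assumptions, not the authors' experiments.

```python
# Sketch: empirically estimating the asymptotic convergence rate of
# gradient-descent training for a single sigmoid unit. Illustrative only;
# the targets, learning rate, and fit window are assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
targets = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)  # separable targets

w = np.zeros(3)
eta = 0.5
errors = []
for t in range(1, 20001):
    out = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid outputs
    errors.append(0.5 * np.sum((targets - out) ** 2))
    grad = -((targets - out) * out * (1 - out)) @ X
    w -= eta * grad

# Slope of log E versus log t over the tail; a slope near -1 indicates E(t) ~ 1/t.
ts = np.arange(1, 20001)
tail = ts > 2000
slope = np.polyfit(np.log(ts[tail]), np.log(np.array(errors)[tail]), 1)[0]
print(f"estimated decay exponent: {slope:.2f}")
```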
Neural Network Simulation of Somatosensory Representational Plasticity
Grajski, Kamil A., Merzenich, Michael
The brain represents the skin surface as a topographic map in the somatosensory cortex. This map has been shown experimentally to be modifiable in a use-dependent fashion throughout life. We present a neural network simulation of the competitive dynamics underlying this cortical plasticity by detailed analysis of receptive field properties of model neurons during simulations of skin coactivation, cortical lesion, digit amputation and nerve section.

1 INTRODUCTION Plasticity of adult somatosensory cortical maps has been demonstrated experimentally in a variety of maps and species (Kaas, et al., 1983; Wall, 1988). This report focuses on modelling primary somatosensory cortical plasticity in the adult monkey. We model the long-term consequences of four specific experiments, taken in pairs. With the first pair, behaviorally controlled stimulation of restricted skin surfaces (Jenkins, et al., 1990) and induced cortical lesions (Jenkins and Merzenich, 1987), we demonstrate that Hebbian-type dynamics is sufficient to account for the inverse relationship between cortical magnification (area of cortical map representing a unit area of skin) and receptive field size (skin surface which when stimulated excites a cortical unit) (Sur, et al., 1980; Grajski and Merzenich, 1990). These results are obtained with several variations of the basic model. We conclude that relying solely on cortical magnification and receptive field size will not disambiguate the contributions of each of the myriad circuits known to occur in the brain. With the second pair, digit amputation (Merzenich, et al., 1984) and peripheral nerve cut (without regeneration) (Merzenich, et al., 1983), we explore the role of local excitatory connections in the model.
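A toy sketch of the Hebbian competitive dynamics at issue (all sizes, rates, and the stimulation protocol are illustrative assumptions, not the published model): repeatedly coactivating a restricted skin region under a synaptic normalization constraint pulls receptive fields toward the stimulated sites, the use-dependent effect described above.

```python
# Toy sketch of use-dependent map plasticity: Hebbian updates with weight
# normalization on a 1D 'skin'. All details (sizes, rates, stimulus) are
# illustrative assumptions, not the published simulation.
import numpy as np

rng = np.random.default_rng(2)
n_skin, n_cortex = 30, 10
W = rng.random((n_cortex, n_skin))
W /= W.sum(axis=1, keepdims=True)              # normalized afferent weights

def stimulate(center, width=2.0):
    """Localized skin activation centered on one site."""
    return np.exp(-0.5 * ((np.arange(n_skin) - center) / width) ** 2)

eta = 0.05
for _ in range(2000):
    s = stimulate(rng.integers(10, 15))        # coactivate a restricted skin region
    c = W @ s                                  # cortical responses
    c /= c.sum()                               # competitive normalization of activity
    W += eta * np.outer(c, s)                  # Hebbian increment
    W /= W.sum(axis=1, keepdims=True)          # synaptic resource constraint

# Receptive-field centers (weight centroids) shift toward the stimulated sites.
print(np.round(W @ np.arange(n_skin), 1))
```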
A Self-organizing Associative Memory System for Control Applications
The CMAC storage scheme has been used as a basis for a software implementation of an associative memory. A major disadvantage of this CMAC concept is that the degree of local generalization (area of interpolation) is fixed. This paper deals with an algorithm for self-organizing variable generalization for the AMS, based on ideas of T. Kohonen.

1 INTRODUCTION For several years research at the Department of Control Theory and Robotics at the Technical University of Darmstadt has been concerned with the design of a learning real-time control loop with neuron-like associative memories (LERNAS) for the control of unknown, nonlinear processes (Ersue, Tolle, 1988). This control concept uses an associative memory system AMS, based on the cerebellar cortex model CMAC by Albus (Albus, 1972), for the storage of a predictive nonlinear process model and an appropriate nonlinear control strategy (Figure 1).

Figure 1: The learning control loop LERNAS

One problem for adjusting the control loop to a process is, however, to find a suitable set of parameters for the associative memory. The parameters in question determine the degree of generalization within the memory and therefore have a direct influence on the number of training steps required to learn the process behaviour. For a good performance of the control loop it is desirable to have a very small generalization around a given setpoint but to have a large generalization elsewhere.
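As a rough illustration of the fixed-generalization issue, here is a minimal CMAC-style tiling memory in Python. The class, parameter names, and delta-rule training are assumed simplifications; the fixed tile width plays the role of the fixed generalization area, and the self-organizing variable generalization proposed in the paper is not implemented here.

```python
# Minimal CMAC-style associative memory sketch (an assumed simplification of
# the Albus scheme): several offset quantizations of the input share storage,
# so training at one point generalizes to a local neighborhood whose size is
# fixed by the tile width.
import numpy as np

class SimpleCMAC:
    def __init__(self, n_tilings=8, tile_width=0.2, x_range=(0.0, 1.0), lr=0.2):
        self.n_tilings = n_tilings
        self.tile_width = tile_width
        self.lo = x_range[0]
        self.lr = lr
        n_tiles = int(np.ceil((x_range[1] - x_range[0]) / tile_width)) + 1
        self.weights = np.zeros((n_tilings, n_tiles))

    def _cells(self, x):
        # Each tiling is shifted by a fraction of the tile width.
        offsets = np.arange(self.n_tilings) / self.n_tilings * self.tile_width
        return ((x - self.lo + offsets) // self.tile_width).astype(int)

    def predict(self, x):
        cells = self._cells(x)
        return self.weights[np.arange(self.n_tilings), cells].mean()

    def train(self, x, target):
        cells = self._cells(x)
        error = target - self.predict(x)
        self.weights[np.arange(self.n_tilings), cells] += self.lr * error

mem = SimpleCMAC()
for _ in range(200):
    for x in np.linspace(0.0, 1.0, 21):
        mem.train(x, np.sin(2 * np.pi * x))       # store a nonlinear mapping
print(round(mem.predict(0.33), 3), round(np.sin(2 * np.pi * 0.33), 3))
```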
Time Dependent Adaptive Neural Networks
Pineda, Fernando J.

A comparison of algorithms that minimize error functions to train the trajectories of recurrent networks reveals how complexity is traded off for causality. These algorithms are also related to time-independent formalisms. It is suggested that causal and scalable algorithms are possible when the activation dynamics of adaptive neurons is fast compared to the behavior to be learned. Standard continuous-time recurrent backpropagation is used in an example.

1 INTRODUCTION Training the time dependent behavior of a neural network model involves the minimization of a function that measures the difference between an actual trajectory and a desired trajectory. The standard method of accomplishing this minimization is to calculate the gradient of an error function with respect to the weights of the system and then to use the gradient in a minimization algorithm (e.g. gradient descent).
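As a schematic of this general setup (gradient descent on a trajectory error), the sketch below trains a tiny discrete-time recurrent net to follow a target trajectory. A finite-difference gradient stands in, purely for brevity, for the exact continuous-time gradient algorithms whose causality and complexity the paper compares; the network size, drive, and trajectory are assumptions.

```python
# Schematic sketch: minimize a trajectory error for a small recurrent net by
# gradient descent. The finite-difference gradient is only to keep the example
# short; the paper concerns exact (backpropagated) gradients and their
# causality/complexity trade-offs.
import numpy as np

T = 20
target = np.sin(np.linspace(0, 2 * np.pi, T))    # desired output trajectory
n = 3                                            # number of units

def trajectory_error(W):
    x = np.zeros(n)
    err = 0.0
    for t in range(T):
        x = np.tanh(W @ x + np.eye(n)[0])        # constant drive on unit 0
        err += (x[-1] - target[t]) ** 2          # unit n-1 is the output
    return err

rng = np.random.default_rng(3)
W = 0.1 * rng.standard_normal((n, n))
eps, eta = 1e-5, 0.05
for step in range(500):
    grad = np.zeros_like(W)
    base = trajectory_error(W)
    for i in range(n):
        for j in range(n):
            Wp = W.copy(); Wp[i, j] += eps
            grad[i, j] = (trajectory_error(Wp) - base) / eps
    W -= eta * grad
print(f"final trajectory error: {trajectory_error(W):.4f}")
```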
Reading a Neural Code
Bialek, William, Rieke, Fred, Steveninck, Robert R. de Ruyter van, Warland, David
Traditional methods of studying neural coding characterize the encoding of known stimuli in average neural responses. Organisms face nearly the opposite task - decoding short segments of a spike train to extract information about an unknown, time-varying stimulus. Here we present strategies for characterizing the neural code from the point of view of the organism, culminating in algorithms for real-time stimulus reconstruction based on a single sample of the spike train. These methods are applied to the design and analysis of experiments on an identified movement-sensitive neuron in the fly visual system. As far as we know this is the first instance in which a direct "reading" of the neural code has been accomplished.
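A minimal sketch of the decoding viewpoint: reconstruct a time-varying stimulus by convolving the spike train with a linear filter fit by least squares. The synthetic stimulus and the Poisson-like encoder below are stand-ins for the fly experiment, not the authors' procedure.

```python
# Sketch: reconstruct a time-varying stimulus from a spike train by convolving
# the spikes with a linear filter chosen by least squares. The synthetic
# stimulus and Poisson-like encoder are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
T, dt = 4000, 0.001
stim = np.convolve(rng.standard_normal(T), np.ones(50) / 50, mode="same")
stim /= stim.std()                                # slowly varying, unit-variance stimulus

rate = 30.0 * (1.0 + np.clip(stim, -0.9, 0.9))    # stimulus-modulated firing rate (Hz)
spikes = (rng.random(T) < rate * dt).astype(float)

# Least-squares linear decoder: stimulus(t) ~= sum_k h[k] * spikes[t - k].
L = 100                                           # filter length in samples
X = np.column_stack([np.roll(spikes, k) for k in range(L)])
X[:L, :] = 0.0                                    # discard wrapped-around samples
h, *_ = np.linalg.lstsq(X, stim, rcond=None)      # optimal linear reconstruction filter

recon = X @ h
print(f"reconstruction correlation: {np.corrcoef(stim[L:], recon[L:])[0, 1]:.2f}")
```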
An Efficient Implementation of the Back-propagation Algorithm on the Connection Machine CM-2
Zhang, Xiru, McKenna, Michael, Mesirov, Jill P., Waltz, David L.
In this paper, we present a novel implementation of the widely used Back-propagation neural net learning algorithm on the Connection Machine CM-2 - a general purpose, massively parallel computer with a hypercube topology. This implementation runs at about 180 million interconnections per second (IPS) on a 64K processor CM-2. The main interprocessor communication operation used is 2D nearest neighbor communication. The techniques developed here can be easily extended to implement other algorithms for layered neural nets on the CM-2, or on other massively parallel computers which have 2D or higher degree connections among their processors.

1 Introduction High-speed simulation of large artificial neural nets has become an important tool for solving real world problems and for studying the dynamic behavior of large populations of interconnected processing elements [3, 2]. This work is intended to provide such a simulation tool for a widely used neural net learning algorithm - the Back-propagation (BP) algorithm [7]. The hardware we have used is the Connection Machine CM-2.
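To give a flavor of why a 2D layout suffices (a serial numpy sketch, not CM-2 code, and the grid view is an assumed simplification of the paper's mapping): if each weight of a layer sits on its own cell of a 2D grid, the forward pass is a local multiply followed by a reduction along one grid axis, the backward pass is a reduction along the other axis, and the weight update is purely local.

```python
# Serial sketch of a 2D weight-grid view of one BP layer. On the actual
# machine each (i, j) cell would be a processor and the sums parallel
# reductions; here numpy stands in for the grid. Shapes and names are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
n_in, n_out = 4, 3
W = rng.standard_normal((n_out, n_in))           # one weight per grid cell
x = rng.standard_normal(n_in)                    # layer input
delta = rng.standard_normal(n_out)               # error signal from the layer above

# Forward: broadcast x along rows, multiply locally, reduce along the input axis.
net = (W * x[None, :]).sum(axis=1)               # row-wise reduction
y = np.tanh(net)

# Backward: broadcast delta along columns, multiply locally, reduce along the
# output axis to get the error signal for the layer below.
delta_below = (W * delta[:, None]).sum(axis=0)   # column-wise reduction

# Weight update: purely local, no interprocessor communication needed.
eta = 0.1
W += eta * np.outer(delta, x)

print(y.shape, delta_below.shape)
```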
The Cascade-Correlation Learning Architecture
Fahlman, Scott E., Lebiere, Christian
Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, and the network determines its own size and topology.
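A compact sketch of the candidate-training step that drives this architecture, in a simplified, assumed form: a single candidate hidden unit is trained to maximize the magnitude of the correlation between its output and the network's residual error, after which its input weights would be frozen and the unit wired in as a feature detector.

```python
# Sketch of the Cascade-Correlation candidate step: train one candidate unit
# to maximize the magnitude of the covariance between its output and the
# current residual errors, then freeze it as a new feature detector.
# Simplified illustration (one candidate, a stand-in residual), not the full
# algorithm.
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((100, 2))
residual = np.sign(X[:, 0] + 0.5 * X[:, 1])      # stand-in for the residual error

v = 0.1 * rng.standard_normal(2)                 # candidate unit's input weights
eta = 0.5
for _ in range(300):
    h = np.tanh(X @ v)                           # candidate unit activations
    cov = np.mean((h - h.mean()) * (residual - residual.mean()))
    # Gradient of the covariance with respect to v; ascend on its magnitude.
    dcov = ((residual - residual.mean()) * (1.0 - h ** 2)) @ X / len(X)
    v += eta * np.sign(cov) * dcov

h = np.tanh(X @ v)
print(f"candidate/residual correlation: {np.corrcoef(h, residual)[0, 1]:.2f}")
```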