A Self-organizing Associative Memory System for Control Applications

Neural Information Processing Systems

ABSTRACT The CMAC storage scheme has been used as a basis for a software implementation of an associative memory. A major disadvantage of this CMAC concept is that the degree of local generalization (area of interpolation) is fixed. This paper deals with an algorithm for self-organizing variable generalization for the AMS, based on ideas of T. Kohonen. 1 INTRODUCTION For several years research at the Department of Control Theory and Robotics at the Technical University of Darmstadt has been concerned with the design of a learning real-time control loop with neuron-like associative memories (LERNAS) for the control of unknown, nonlinear processes (Ersue, Tolle, 1988). This control concept uses an associative memory system AMS, based on the cerebellar cortex model CMAC by Albus (Albus, 1972), for the storage of a predictive nonlinear process model and an appropriate nonlinear control strategy (Figure 1). Figure 1: The learning control loop LERNAS. One problem in adjusting the control loop to a process is, however, to find a suitable set of parameters for the associative memory. The parameters in question determine the degree of generalization within the memory and therefore have a direct influence on the number of training steps required to learn the process behaviour. For good performance of the control loop it is desirable to have a very small generalization around a given setpoint but a large generalization elsewhere.
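The fixed generalization the abstract refers to can be seen in a toy CMAC-style memory: each input activates one cell in each of several overlapping tilings, and the tiling width sets the area of interpolation once and for all. The sketch below is a minimal, hypothetical illustration (class and parameter names such as `Cmac`, `n_tilings`, `resolution` are ours, not the AMS software described in the paper).

```python
import numpy as np

class Cmac:
    """Minimal CMAC-style associative memory (after Albus, 1972), illustrative only.

    Several overlapping tilings quantize the input space; each input activates one
    cell per tiling, and the output is the mean of the activated weights. The tiling
    width plays the role of the fixed generalization radius that the paper replaces
    with a self-organizing, variable one.
    """

    def __init__(self, n_tilings=8, resolution=0.1, lr=0.2):
        self.n_tilings = n_tilings
        self.resolution = resolution   # cell width = fixed area of interpolation
        self.lr = lr
        self.weights = {}              # hashed storage, as in software CMAC variants

    def _cells(self, x):
        # each tiling is offset by a fraction of the cell width
        for t in range(self.n_tilings):
            offset = t * self.resolution / self.n_tilings
            yield (t, tuple(np.floor((np.asarray(x) + offset) / self.resolution).astype(int)))

    def predict(self, x):
        return np.mean([self.weights.get(c, 0.0) for c in self._cells(x)])

    def train(self, x, target):
        error = target - self.predict(x)
        for c in self._cells(x):
            self.weights[c] = self.weights.get(c, 0.0) + self.lr * error / self.n_tilings

# train on a 1-D nonlinear map and query a point near the training data
mem = Cmac()
for _ in range(200):
    x = np.random.uniform(0, 1, size=1)
    mem.train(x, np.sin(2 * np.pi * x[0]))
print(mem.predict([0.25]), np.sin(2 * np.pi * 0.25))
```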



Time Dependent Adaptive Neural Networks

Neural Information Processing Systems

Fernando J. Pineda, Center for Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109. ABSTRACT A comparison of algorithms that minimize error functions to train the trajectories of recurrent networks reveals how complexity is traded off for causality. These algorithms are also related to time-independent formalisms. It is suggested that causal and scalable algorithms are possible when the activation dynamics of adaptive neurons are fast compared to the behavior to be learned. Standard continuous-time recurrent backpropagation is used in an example. 1 INTRODUCTION Training the time dependent behavior of a neural network model involves the minimization of a function that measures the difference between an actual trajectory and a desired trajectory. The standard method of accomplishing this minimization is to calculate the gradient of an error function with respect to the weights of the system and then to use the gradient in a minimization algorithm (e.g.
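As a rough illustration of trajectory learning in a recurrent network, the following sketch integrates simple additive activation dynamics and descends the trajectory error with brute-force finite-difference gradients. It is an assumption-laden toy of ours, not the recurrent backpropagation formalism of the paper, and all names and parameter values are illustrative.

```python
import numpy as np

def trajectory_error(W, x0, target, dt=0.1, steps=100):
    """Integrate dx/dt = -x + tanh(W x) with a simple Euler scheme and
    accumulate squared error against a desired trajectory."""
    x, err = x0.copy(), 0.0
    for t in range(steps):
        x = x + dt * (-x + np.tanh(W @ x))
        err += np.sum((x - target[t]) ** 2)
    return err

rng = np.random.default_rng(0)
n, steps = 3, 100
W = 0.1 * rng.standard_normal((n, n))
x0 = np.zeros(n)
# desired trajectory: a sinusoid that is slow relative to the unit relaxation time
target = 0.5 * np.sin(np.linspace(0, 2 * np.pi, steps))[:, None] * np.ones(n)

# crude gradient descent on the weights via central finite differences; the paper's
# point is that smarter (causal, scalable) ways of getting this gradient exist when
# the neuron dynamics are fast compared to the behavior being learned
eps, lr = 1e-4, 0.05
for epoch in range(200):
    grad = np.zeros_like(W)
    for i in range(n):
        for j in range(n):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            grad[i, j] = (trajectory_error(Wp, x0, target)
                          - trajectory_error(Wm, x0, target)) / (2 * eps)
    W -= lr * grad
print("final trajectory error:", trajectory_error(W, x0, target))
```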


Reading a Neural Code

Neural Information Processing Systems

Traditional methods of studying neural coding characterize the encoding of known stimuli in average neural responses. Organisms face nearly the opposite task - decoding short segments of a spike train to extract information about an unknown, time-varying stimulus. Here we present strategies for characterizing the neural code from the point of view of the organism, culminating in algorithms for real-time stimulus reconstruction based on a single sample of the spike train. These methods are applied to the design and analysis of experiments on an identified movement-sensitive neuron in the fly visual system. As far as we know this is the first instance in which a direct "reading" of the neural code has been accomplished.
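A stripped-down version of the decoding idea, reconstructing a time-varying stimulus from a single spike train by fitting a linear kernel, might look like the following sketch. A synthetic Poisson encoder stands in for the identified fly neuron, and every name and constant is illustrative rather than taken from the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.001, 5.0                      # 1 ms bins, 5 s of data
t = np.arange(0, T, dt)
stimulus = np.convolve(rng.standard_normal(t.size), np.ones(50) / 50, mode="same")

# toy encoder: an inhomogeneous Poisson neuron whose rate follows the stimulus
rate = 40.0 * (1 + np.clip(stimulus, -1, 1))          # spikes/s
spikes = (rng.random(t.size) < rate * dt).astype(float)

# build shifted copies of the spike train and solve a least-squares problem for a
# reconstruction kernel: s_est(t) = sum_k K(tau_k) * spikes(t - tau_k)
lags = np.arange(-50, 51)                              # +-50 ms around each spike
X = np.stack([np.roll(spikes, k) for k in lags], axis=1)
K, *_ = np.linalg.lstsq(X, stimulus, rcond=None)
estimate = X @ K

corr = np.corrcoef(stimulus, estimate)[0, 1]
print("reconstruction correlation:", round(corr, 3))
```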


An Efficient Implementation of the Back-propagation Algorithm on the Connection Machine CM-2

Neural Information Processing Systems

In this paper, we present a novel implementation of the widely used Back-propagation neural net learning algorithm on the Connection Machine CM-2 - a general purpose, massively parallel computer with a hypercube topology. This implementation runs at about 180 million interconnections per second (IPS) on a 64K processor CM-2. The main interprocessor communication operation used is 2D nearest neighbor communication. The techniques developed here can be easily extended to implement other algorithms for layered neural nets on the CM-2, or on other massively parallel computers which have 2D or higher degree connections among their processors. 1 Introduction High-speed simulation of large artificial neural nets has become an important tool for solving real world problems and for studying the dynamic behavior of large populations of interconnected processing elements [3, 2]. This work is intended to provide such a simulation tool for a widely used neural net learning algorithm - the Back-propagation (BP) algorithm [7]. The hardware we have used is the Connection Machine CM-2.
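The layout idea behind such implementations, blocking a layer's weight matrix over a 2D grid of processors and combining partial products along grid rows, can be mimicked serially in numpy. The sketch below only illustrates that decomposition under our own assumptions; `grid_matvec` and its parameters are hypothetical, not the CM-2 code.

```python
import numpy as np

def grid_matvec(W, x, grid=(4, 4)):
    """Simulate a forward pass y = W @ x on a 2D 'processor' grid.

    The weight matrix is blocked over a P x Q array of processors; each processor
    multiplies its block by its slice of the activations, and the partial results
    are summed along grid rows - the step that on a real machine would be done
    with nearest-neighbor communication rather than a global sum.
    """
    P, Q = grid
    row_blocks = np.array_split(np.arange(W.shape[0]), P)
    col_blocks = np.array_split(np.arange(W.shape[1]), Q)

    partial = np.zeros((P, Q, max(len(r) for r in row_blocks)))
    for i, rows in enumerate(row_blocks):
        for j, cols in enumerate(col_blocks):
            partial[i, j, :len(rows)] = W[np.ix_(rows, cols)] @ x[cols]

    # "communication" step: accumulate partial sums across each grid row
    y = np.concatenate([partial[i, :, :len(rows)].sum(axis=0)
                        for i, rows in enumerate(row_blocks)])
    return y

rng = np.random.default_rng(2)
W = rng.standard_normal((16, 12))
x = rng.standard_normal(12)
print(np.allclose(grid_matvec(W, x), W @ x))   # True: blocked result matches W @ x
```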


Asymptotic Convergence of Backpropagation: Numerical Experiments

Neural Information Processing Systems

We have calculated, both analytically and in simulations, the rate of convergence at long times in the backpropagation learning algorithm for networks with and without hidden units. Our basic finding for units using the standard sigmoid transfer function is 1/t convergence of the error for large t, with at most logarithmic corrections for networks with hidden units. Other transfer functions may lead to a slower polynomial rate of convergence. Our analytic calculations were presented in (Tesauro, He & Ahmad, 1989). Here we focus in more detail on our empirical measurements of the convergence rate in numerical simulations, which confirm our analytic results.
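The kind of empirical measurement described, estimating the power-law exponent of the error at long training times, can be sketched for a single sigmoid unit as follows. The setup is ours and purely illustrative; it is not the networks or data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# one sigmoid unit trained on a fixed, separable batch by plain gradient descent
X = rng.standard_normal((20, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w, b, lr = np.zeros(2), 0.0, 0.5

errors, epochs = [], 20000
for t in range(epochs):
    p = sigmoid(X @ w + b)
    errors.append(np.mean((p - y) ** 2))
    grad = (p - y) * p * (1 - p)          # squared-error gradient through the sigmoid
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

# fit log(error) vs log(t) over the tail; a slope near -1 corresponds to 1/t decay
ts = np.arange(1, epochs + 1)
tail = slice(epochs // 2, None)
slope = np.polyfit(np.log(ts[tail]), np.log(np.array(errors)[tail]), 1)[0]
print("estimated asymptotic power-law exponent:", round(slope, 2))
```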


The Cascade-Correlation Learning Architecture

Neural Information Processing Systems

Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, and the network determines its own size and topology.
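A compact, simplified rendering of the loop described above (fit the output weights, train a candidate unit to track the residual error, freeze its input-side weights, repeat) might look like the sketch below. It is a hypothetical toy on XOR, not the authors' implementation; it replaces their correlation score with a plain covariance and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_outputs(F, y, lr=0.5, epochs=2000):
    """Fit output weights by gradient descent on squared error; F includes a bias column."""
    w = np.zeros(F.shape[1])
    for _ in range(epochs):
        p = sigmoid(F @ w)
        w -= lr * F.T @ ((p - y) * p * (1 - p)) / len(y)
    return w

def add_hidden_unit(F, residual, lr=0.5, epochs=2000):
    """Train a candidate unit to maximize the covariance between its output and the
    residual error; its input-side weights are frozen once it is installed."""
    v = 0.1 * rng.standard_normal(F.shape[1])
    for _ in range(epochs):
        h = sigmoid(F @ v)
        cov = np.sum((h - h.mean()) * (residual - residual.mean()))
        # ascend the covariance objective; sign(cov) handles either correlation sign
        dv = np.sign(cov) * F.T @ ((residual - residual.mean()) * h * (1 - h))
        v += lr * dv / len(residual)
    return v

# XOR, a classic test case for constructive algorithms
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
F = np.hstack([X, np.ones((4, 1))])                  # inputs + bias feed every later unit

for _ in range(3):                                   # grow up to three hidden units
    w = train_outputs(F, y)
    residual = sigmoid(F @ w) - y
    if np.mean(residual ** 2) < 1e-3:
        break
    v = add_hidden_unit(F, residual)                 # input weights frozen from here on
    F = np.hstack([F, sigmoid(F @ v)[:, None]])      # cascade: new unit sees all earlier ones

print("final MSE:", round(float(np.mean((sigmoid(F @ train_outputs(F, y)) - y) ** 2)), 4))
```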



Analytic Solutions to the Formation of Feature-Analysing Cells of a Three-Layer Feedforward Visual Information Processing Neural Net

Neural Information Processing Systems

Analytic solutions to the information-theoretic evolution equation of the connection strength of a three-layer feedforward neural net for visual information processing are presented. The results are (1) the receptive fields of the feature-analysing cells correspond to the eigenvector of the maximum eigenvalue of the Fredholm integral equation of the first kind derived from the evolution equation of the connection strength; (2) a symmetry-breaking mechanism (parity violation) has been identified as responsible for the changes in the morphology of the receptive field; (3) the conditions for the formation of different morphologies are explicitly identified.
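Result (1) has a direct numerical counterpart: discretize a correlation kernel and take the eigenvector of the largest eigenvalue as the receptive field profile. The 1-D sketch below uses a Gaussian kernel that we assume purely for illustration; it is not the paper's analytic solution.

```python
import numpy as np

# discretize a 1-D version of the Fredholm eigenproblem
#   lambda * w(x) = integral Q(x, x') w(x') dx'
# with an assumed Gaussian correlation kernel Q(x, x') = exp(-(x - x')^2 / (2 s^2))
n, s = 101, 0.15
x = np.linspace(-1, 1, n)
dx = x[1] - x[0]
Q = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * s ** 2)) * dx

eigvals, eigvecs = np.linalg.eigh(Q)          # the discretized kernel is symmetric
order = np.argsort(eigvals)[::-1]
leading = eigvecs[:, order[0]]                # receptive field profile (even mode)
second = eigvecs[:, order[1]]                 # first parity-breaking (odd) mode

print("top eigenvalues:", np.round(eigvals[order[:3]], 3))
print("leading mode is even:",
      np.allclose(leading, leading[::-1], atol=1e-6)
      or np.allclose(leading, -leading[::-1], atol=1e-6))
```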


Pulse-Firing Neural Chips for Hundreds of Neurons

Neural Information Processing Systems

Oxford OX1 3PJ; Edinburgh EH9 3JL, Univ. of Edinburgh. ABSTRACT We announce new CMOS synapse circuits using only three and four MOSFETs/synapse. Neural states are asynchronous pulse streams, upon which arithmetic is performed directly. Chips implementing over 100 fully programmable synapses are described and projections to networks of hundreds of neurons are made. 1 OVERVIEW OF PULSE FIRING NEURAL VLSI The inspiration for the use of pulse firing in silicon neural networks is clearly the electrical/chemical pulse mechanism in "real" biological neurons. Neurons fire voltage pulses of a frequency determined by their level of activity but of a constant magnitude (usually 5 Volts) [Murray, 1989a]. As indicated in Figure 1, synapses perform arithmetic directly on these asynchronous pulses, to increment or decrement the receiving neuron's activity.
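The pulse-stream arithmetic can be caricatured in software: activity becomes the frequency of a random pulse train, and a synapse scales it by probabilistically passing or deleting pulses. The sketch below is a behavioural toy of ours, not the three- or four-MOSFET circuits on the chips, and it models only the excitatory (rate-scaling) case.

```python
import numpy as np

rng = np.random.default_rng(5)

def pulse_stream(activity, n_steps=10000):
    """Encode an activity level in [0, 1] as a stochastic stream of unit pulses
    whose average frequency is proportional to the activity."""
    return (rng.random(n_steps) < activity).astype(int)

def synapse(pulses, weight):
    """Pulse-stream multiplication: pass or delete incoming pulses with a probability
    given by the synaptic weight (0..1), so the outgoing pulse rate is roughly
    weight * incoming rate - the arithmetic is done on the pulses directly."""
    gate = rng.random(pulses.size) < weight
    return pulses * gate

activity, weight = 0.6, 0.5
pre = pulse_stream(activity)
post = synapse(pre, weight)
print("presynaptic rate :", pre.mean())       # ~0.6
print("postsynaptic rate:", post.mean())      # ~0.3 = weight * activity
```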