Associative Memory in a Simple Model of Oscillating Cortex

Neural Information Processing Systems

A generic model of oscillating cortex, which assumes "minimal" coupling justified by known anatomy, is shown to function as an associative memory, using previously developed theory. The network has explicit excitatory neurons with local inhibitory interneuron feedback that forms a set of nonlinear oscillators coupled only by long-range excitatory connections. Using a local Hebb-like learning rule for primary and higher-order synapses at the ends of the long-range connections, the system learns to store the kinds of oscillation amplitude patterns observed in olfactory and visual cortex. This rule is derived from a more general "projection algorithm" for recurrent analog networks that analytically guarantees content-addressable memory storage of continuous periodic sequences, with a capacity of N/2 Fourier components for an N-node network and no "spurious" attractors.

1 Introduction

This is a sketch of recent results stemming from work which is discussed completely in [1, 2, 3]. Patterns of 40 to 80 Hz oscillation have been observed in the large-scale activity of olfactory cortex [4] and visual neocortex [5], and shown to predict the olfactory and visual pattern recognition responses of a trained animal.
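
As a loose illustration of the outer-product flavor of such a Hebb-like rule (this is not the projection algorithm itself, and the network size and pattern count are illustrative assumptions), the sketch below stores orthonormal amplitude patterns in long-range weights and shows that the resulting coupling matrix projects a noisy cue back onto the span of the stored patterns, which is the linear skeleton behind the "projection" name:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 16, 4                       # network size, number of stored patterns

# Orthonormal amplitude patterns (illustrative; real ones come from data).
patterns, _ = np.linalg.qr(rng.standard_normal((N, K)))
patterns = patterns.T              # K x N, rows are patterns

# Hebb-like outer-product rule for the long-range excitatory weights:
# W = sum_k outer(p_k, p_k), a projector onto the stored subspace.
W = patterns.T @ patterns

# W maps a noisy cue back onto the span of the stored patterns.
cue = patterns[0] + 0.3 * rng.standard_normal(N)
recalled = W @ cue
overlaps = patterns @ recalled / np.linalg.norm(recalled)
print(np.round(overlaps, 3))       # largest overlap with pattern 0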


Non-Boltzmann Dynamics in Networks of Spiking Neurons

Neural Information Processing Systems

We study networks of spiking neurons in which spikes are fired as a Poisson process. The state of a cell is determined by the instantaneous firing rate, and in the limit of high firing rates our model reduces to that studied by Hopfield. We find that the inclusion of spiking results in several new features, such as a noise-induced asymmetry between "on" and "off" states of the cells and probability currents which destroy the usual description of network dynamics in terms of energy surfaces. Taking account of spikes also allows us to calibrate network parameters such as "synaptic weights" against experiments on real synapses. Realistic forms of the postsynaptic response alter the network dynamics, which suggests a novel dynamical learning mechanism.
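
A minimal sketch of the kind of model described (Poisson spikes at a rate set by synaptic drive, exponentially filtered into postsynaptic activity, with Hopfield-style weights) is given below. The rate constants, sigmoid gain, and cueing protocol are illustrative assumptions; as the saturation rate grows, the filtered spike trains approach the deterministic rate dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, tau = 100, 1e-3, 20e-3       # cells, time step (s), synaptic filter (s)
R_MAX = 100.0                        # saturation firing rate (Hz)

# Hopfield-style symmetric weights storing one random binary pattern.
xi = rng.choice([-1.0, 1.0], size=N)
J = np.outer(xi, xi) / N
np.fill_diagonal(J, 0.0)

def rate(h):
    """Instantaneous Poisson firing rate as a sigmoid of synaptic drive."""
    return R_MAX / (1.0 + np.exp(-4.0 * h))

s = np.zeros(N)                      # filtered spike trains (Hz)
for step in range(3000):
    act = 2.0 * s / R_MAX - 1.0      # map rates onto [-1, 1] activities
    h = J @ act + (0.2 * xi if step < 500 else 0.0)  # brief external cue
    spikes = rng.random(N) < rate(h) * dt            # Poisson spikes this bin
    s = s * (1.0 - dt / tau) + spikes / tau

print(f"pattern overlap: {xi @ (2.0 * s / R_MAX - 1.0) / N:.2f}")  # near 1
```

With the cue removed after the first 500 steps, a final overlap near 1 means the spiking network retrieves the stored pattern; shrinking R_MAX makes the spike-driven fluctuations, and the asymmetries they induce, more prominent.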


A Neural Network for Feature Extraction

Neural Information Processing Systems

The paper suggests a statistical framework for the parameter estimation problem associated with unsupervised learning in a neural network, leading to an exploratory projection pursuit network that performs feature extraction, or dimensionality reduction.
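
The sketch below shows one common reading of an exploratory projection pursuit unit (an assumption about the general technique, not this paper's specific network): after whitening, a single linear unit descends the fourth moment of its output, so the learned weight vector points at the most sub-Gaussian (here, bimodal) projection of the data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: one "interesting" (bimodal) axis hidden among Gaussian noise.
n, d = 5000, 10
X = rng.standard_normal((n, d))
X[:, 0] = rng.choice([-2.0, 2.0], size=n) + 0.3 * rng.standard_normal(n)

# Whiten so that projection pursuit compares directions fairly.
X -= X.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov(X.T))
X = X @ vecs @ np.diag(vals ** -0.5) @ vecs.T

# One linear unit; minimize E[y^4] on the unit sphere so the weight
# vector converges to the most non-Gaussian projection.
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
for _ in range(500):
    y = X @ w
    grad = 4.0 * (y ** 3) @ X / n      # gradient of E[y^4] w.r.t. w
    w -= 0.01 * grad
    w /= np.linalg.norm(w)             # keep the projection normalized

print(np.round(w, 2))  # should concentrate on the bimodal axis (up to sign)
```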


The Cocktail Party Problem: Speech/Data Signal Separation Comparison between Backpropagation and SONN

Neural Information Processing Systems

This work introduces a new method called the Self Organizing Neural Network (SONN) algorithm and compares its performance with Back Propagation in a signal separation application. The problem is to separate two signals: a modem data signal and a male speech signal, added and transmitted through a 4 kHz channel. The signals are sampled at 8 kHz, and using supervised learning, an attempt is made to reconstruct them. The SONN is an algorithm that constructs its own network topology during training, which is shown to be much smaller than the BP network, faster to train, and free from the trial-and-error network design that characterizes BP.

1. INTRODUCTION

Research in Neural Networks has witnessed major changes in algorithm design focus, motivated by the limitations perceived in the algorithms available at the time. With the extensive work performed in the last few years using multilayered networks, it was soon discovered that these networks present limitations in tasks where: (a) problem complexity is difficult to determine a priori, making it hard to design a network of the correct size; (b) training not only takes prohibitively long, but also requires a large number of samples and fine parameter adjustment, without guarantee of convergence; (c) the system identification task is not handled efficiently for systems whose time-varying structure changes radically; and (d) the trained network is little more than a black box of weights and connections, revealing little about the problem structure, so it is hard to justify the weight choices or to explain the output decisions for a given input vector.
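
A minimal, hypothetical rendition of the experimental setup makes the supervised-separation framing concrete. The sketch below synthesizes stand-ins for the two sources (the actual recordings are not available here), adds them, and fits a least-squares tapped-delay-line separator as a baseline; BP or SONN would replace the linear fit with a trained network. The signal shapes, window length, and all constants are assumptions.

```python
import numpy as np

fs, T = 8000, 2.0                       # 8 kHz sampling, 2 s of signal
t = np.arange(int(fs * T)) / fs

# Stand-ins for the two sources: a PSK-like "modem" waveform and a
# "speech-like" low-frequency amplitude-modulated tone.
modem = np.sign(np.sin(2 * np.pi * 600 * t)) * np.sin(2 * np.pi * 1800 * t)
speech = np.sin(2 * np.pi * 200 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
mix = modem + speech                    # the 4 kHz channel carries the sum

# Supervised setup: from a sliding window of the mixture, predict the
# current sample of one source (here, speech).
L = 64
Xwin = np.lib.stride_tricks.sliding_window_view(mix, L)
target = speech[L - 1:]

# Linear least-squares separator as the simplest baseline.
w, *_ = np.linalg.lstsq(Xwin, target, rcond=None)
est = Xwin @ w
err = np.mean((est - target) ** 2) / np.mean(target ** 2)
print(f"relative reconstruction error: {err:.3f}")  # well below 1 = separation
```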


Performance of Connectionist Learning Algorithms on 2-D SIMD Processor Arrays

Neural Information Processing Systems

The mapping of the back-propagation and mean field theory learning algorithms onto a generic 2-D SIMD computer is described. This architecture proves to be well suited for these applications, since efficiencies close to the optimum can be attained. Expressions for the learning rates are given and then particularized to the DAP array processor.
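
As a rough illustration of why a 2-D SIMD array suits these algorithms (a generic mapping sketch, not the paper's DAP-specific implementation or its rate expressions), the snippet below emulates a forward pass mapped one weight per processing element: a broadcast of the input, an elementwise multiply in every PE at once, and a logarithmic shift-and-add reduction along rows.

```python
import numpy as np

rng = np.random.default_rng(4)

# Emulate a grid of PEs: one weight per processing element.
n_out, n_in = 4, 4
W = rng.standard_normal((n_out, n_in))
x = rng.standard_normal(n_in)

prod = W * x[np.newaxis, :]          # broadcast + multiply in all PEs at once
acc = prod.copy()
step = 1
while step < n_in:                   # log2(n_in) shift-and-add steps
    acc = acc + np.roll(acc, -step, axis=1)
    step *= 2
y = np.tanh(acc[:, 0])               # column 0 now holds each row's sum

assert np.allclose(acc[:, 0], W @ x) # matches the usual matrix-vector product
print(y)
```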


A Reconfigurable Analog VLSI Neural Network Chip

Neural Information Processing Systems

The distributed-neuron synapses are arranged in blocks of 16, which we call '4 x 4 tiles'. Switch matrices are interleaved between each of these tiles to provide programmability of interconnections. With a small area overhead (15%), the 1024 units of the network can be rearranged in various configurations. Some of the possible configurations are a 12-32-12 network, a 16-12-12-16 network, two 12-32 networks, etc. (the numbers separated by dashes indicate the number of units per layer, including the input layer). Weights are stored in analog form on MOS capacitors.
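
As a quick plausibility check, assuming that "units" counts the distributed-neuron synapses and that consecutive layers are fully connected (both assumptions, not statements from the abstract), one can verify that each quoted configuration fits within the 1024 units:

```python
# Check that the quoted configurations fit on 1024 units in 4 x 4 tiles.
TOTAL_UNITS, TILE = 1024, 16

def synapses(layers):
    """Fully connected synapse count for consecutive layer sizes."""
    return sum(a * b for a, b in zip(layers, layers[1:]))

for name, nets in [("12-32-12", [[12, 32, 12]]),
                   ("16-12-12-16", [[16, 12, 12, 16]]),
                   ("two 12-32", [[12, 32], [12, 32]])]:
    used = sum(synapses(net) for net in nets)
    print(f"{name}: {used} synapses, {used // TILE} tiles "
          f"({'fits within' if used <= TOTAL_UNITS else 'exceeds'} "
          f"{TOTAL_UNITS})")
```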


Asymptotic Convergence of Backpropagation: Numerical Experiments

Neural Information Processing Systems

We have calculated, both analytically and in simulations, the rate of convergence at long times in the backpropagation learning algorithm for networks with and without hidden units. Our basic finding for units using the standard sigmoid transfer function is 1/t convergence of the error for large t, with at most logarithmic corrections for networks with hidden units. Other transfer functions may lead to a slower polynomial rate of convergence. Our analytic calculations were presented in (Tesauro, He & Ahmad, 1989). Here we focus in more detail on our empirical measurements of the convergence rate in numerical simulations, which confirm our analytic results.
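
The 1/t result is easy to reproduce in miniature. The sketch below trains a single sigmoid unit with quadratic error toward an asymptotic target and fits the power-law exponent of the error decay; a slope near -1 is the 1/t behaviour described above. The single-unit setup and all constants are illustrative choices, not the paper's experimental protocol.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single sigmoid unit, target at the asymptote, quadratic error.
x, target, w, lr = 1.0, 1.0, 0.0, 0.5
errors, ts = [], []
for t in range(1, 200001):
    y = sigmoid(w * x)
    w += lr * 2 * (target - y) * y * (1 - y) * x   # gradient descent step
    if t % 20000 == 0:                             # sample the long-time tail
        errors.append((target - y) ** 2)
        ts.append(t)

# Fit the slope of log E vs log t; a slope near -1 means 1/t convergence.
slope = np.polyfit(np.log(ts), np.log(errors), 1)[0]
print(f"measured power-law exponent: {slope:.2f}")
```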


A Self-organizing Associative Memory System for Control Applications

Neural Information Processing Systems

The CMAC storage scheme has been used as a basis for a software implementation of an associative memory system (AMS). A major disadvantage of this CMAC concept is that the degree of local generalization (area of interpolation) is fixed. This paper deals with an algorithm for self-organizing variable generalization for the AMS, based on ideas of T. Kohonen.

1 INTRODUCTION

For several years, research at the Department of Control Theory and Robotics at the Technical University of Darmstadt has been concerned with the design of a learning real-time control loop with neuron-like associative memories (LERNAS) for the control of unknown, nonlinear processes (Ersue, Tolle, 1988). This control concept uses an associative memory system (AMS), based on the cerebellar cortex model CMAC by Albus (Albus, 1972), for the storage of a predictive nonlinear process model and an appropriate nonlinear control strategy (Fig. 1).

Figure 1: The learning control loop LERNAS

One problem in adjusting the control loop to a process, however, is to find a suitable set of parameters for the associative memory. The parameters in question determine the degree of generalization within the memory and therefore have a direct influence on the number of training steps required to learn the process behaviour. For good performance of the control loop it is desirable to have a very small generalization around a given setpoint but a large generalization elsewhere.
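
For readers unfamiliar with CMAC, the sketch below implements a minimal one-dimensional version in which several overlapping tilings give local generalization over an area set by the tile width, the quantity this paper makes adaptive. The self-organizing variable generalization itself is not reproduced here; the class name and all constants are illustrative.

```python
import numpy as np

class TinyCMAC:
    """Minimal 1-D CMAC: overlapping tilings map an input to n_tilings
    weights; the tile width sets the (here fixed) area of local
    generalization."""
    def __init__(self, n_tilings=8, tile_width=0.2, n_tiles=64, lr=0.1):
        self.n_tilings, self.width, self.lr = n_tilings, tile_width, lr
        self.n_tiles = n_tiles
        self.w = np.zeros((n_tilings, n_tiles))
        self.offsets = np.linspace(0.0, tile_width, n_tilings, endpoint=False)

    def _cells(self, x):
        """Active tile index in each tiling (shifted quantizations of x)."""
        return ((x + self.offsets) // self.width).astype(int) % self.n_tiles

    def predict(self, x):
        return self.w[np.arange(self.n_tilings), self._cells(x)].sum()

    def train(self, x, target):
        err = target - self.predict(x)           # LMS update of active cells
        self.w[np.arange(self.n_tilings), self._cells(x)] += self.lr * err

cmac = TinyCMAC()
rng = np.random.default_rng(5)
for _ in range(5000):                            # learn y = sin(2*pi*x)
    x = rng.random()
    cmac.train(x, np.sin(2 * np.pi * x))
print(round(cmac.predict(0.25), 2))              # close to sin(pi/2) = 1
```

Inputs that fall within one tile width of each other share weights and therefore generalize to one another; making that width vary across the input space, as this paper does, is what allows fine resolution near a setpoint and coarse generalization elsewhere.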