An Analog VLSI Model of Adaptation in the Vestibulo-Ocular Reflex

Neural Information Processing Systems

The vestibulo-ocular reflex (VOR) is the primary mechanism that controls the compensatory eye movements that stabilize retinal images during rapid head motion. The primary pathways of this system are feed-forward, with inputs from the semicircular canals and outputs to the oculomotor system. Since visual feedback is not used directly in the VOR computation, the system must exploit motor learning to perform correctly. Lisberger (1988) has proposed a model for adapting the VOR gain using image-slip information from the retina. We have designed and tested analog very large-scale integrated (VLSI) circuitry that implements a simplified version of Lisberger's adaptive VOR model.
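
A minimal sketch of the kind of image-slip-driven gain adaptation the abstract describes, assuming a scalar VOR gain, an ideal gain of 1, and an illustrative learning rate (none of these values come from the paper or the chip):

    import numpy as np

    def adapt_vor_gain(steps=500, eta=0.05, g=0.5, ideal_gain=1.0, seed=0):
        """Toy image-slip-driven adaptation of a scalar VOR gain.

        The eye command is -g * head_velocity, so the retinal image slip is
        (g - ideal_gain) * head_velocity; the update nudges g down the
        gradient of the squared slip until the slip vanishes.
        """
        rng = np.random.default_rng(seed)
        for _ in range(steps):
            h = rng.normal()              # head velocity from the canals
            slip = (g - ideal_gain) * h   # residual image motion on the retina
            g -= eta * slip * h           # reduce squared slip
        return g

    print(adapt_vor_gain())               # converges toward 1.0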


Computational Efficiency: A Common Organizing Principle for Parallel Computer Maps and Brain Maps?

Neural Information Processing Systems

It is well known that neural responses in particular brain regions are spatially organized, but no general principles have been developed that relate the structure of a brain map to the nature of the associated computation. On parallel computers, maps of a sort quite similar to brain maps arise when a computation is distributed across multiple processors. In this paper we will discuss the relationship between maps and computations on these computers and suggest how similar considerations might also apply to maps in the brain. 1 INTRODUCTION A great deal of effort in experimental and theoretical neuroscience is devoted to recording and interpreting spatial patterns of neural activity. A variety of map patterns have been observed in different brain regions and, presumably, these patterns reflect something about the nature of the neural computations being carried out in these regions. To date, however, there have been no general principles for interpreting the structure of a brain map in terms of properties of the associated computation. In the field of parallel computing, analogous maps arise when a computation is distributed across multiple processors and, in this case, the relationship between maps and computations is better understood. In this paper, we will attempt to relate some of the mapping principles from the field of parallel computing to the organization of brain maps.
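
A small illustration of the compute-versus-communication trade-off behind such maps, assuming a locally coupled 2D grid of units partitioned into rectangular per-processor patches (the function and numbers are illustrative, not from the paper):

    def patch_cost(rows, cols, p_rows, p_cols):
        """Per-processor work and communication when a rows x cols grid of
        locally coupled units is split across a p_rows x p_cols processor
        array: work scales with patch area, neighbour exchange with patch
        perimeter, so squarer patches communicate less for the same work."""
        h, w = rows // p_rows, cols // p_cols
        return h * w, 2 * (h + w)

    print(patch_cost(256, 256, 16, 4))   # elongated patches: (1024, 160)
    print(patch_cost(256, 256, 8, 8))    # square patches:    (1024, 128)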


Meiosis Networks

Neural Information Processing Systems

A central problem in connectionist modelling is the control of network and architectural resources during learning. In the present approach, weights reflect a coarse prediction history as coded by a distribution of values and parameterized in the mean and standard deviation of these weight distributions. Weight updates are a function of both the mean and standard deviation of each connection in the network and vary as a function of the error signal ("stochastic delta rule"; Hanson, 1990). Consequently, the weights maintain information on their central tendency and their "uncertainty" in prediction. Such information is useful in establishing a policy concerning the nodal complexity of the network and the growth of new nodes. For example, during problem solving the present network can undergo "meiosis", producing two nodes where there was one "overtaxed" node as measured by its coefficient of variation. It is shown on a number of benchmark problems that meiosis networks can find minimal architectures, reduce computational complexity, and, overall, increase the efficiency of representation learning.
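
A rough sketch of the two ingredients the abstract names, the stochastic delta rule's (mean, standard deviation) weights and a coefficient-of-variation splitting test; the constants and exact update form here are illustrative, not Hanson's (1990):

    import numpy as np

    rng = np.random.default_rng(0)

    class StochasticWeight:
        """A connection kept as a distribution over values (stochastic delta rule)."""
        def __init__(self, mu=0.0, sigma=1.0):
            self.mu, self.sigma = mu, sigma

        def sample(self):
            return rng.normal(self.mu, self.sigma)   # weight value used on this pass

        def update(self, grad, lr=0.1, zeta=0.95):
            self.mu -= lr * grad            # move the mean down the error gradient
            self.sigma += lr * abs(grad)    # error grows the uncertainty...
            self.sigma *= zeta              # ...which otherwise decays geometrically

    def should_split(weights, cv_threshold=1.0):
        """Meiosis test: split an 'overtaxed' node whose weights' coefficient
        of variation (sigma / |mu|) stays large."""
        cv = np.mean([w.sigma / (abs(w.mu) + 1e-8) for w in weights])
        return cv > cv_threshold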


A Neural Network for Real-Time Signal Processing

Neural Information Processing Systems

This paper describes a neural network algorithm that (1) performs temporal pattern matching in real-time, (2) is trained online, with a single pass, (3) requires only a single template for training of each representative class, (4) is continuously adaptable to changes in background noise, (5) deals with transient signals having low signal-to-noise ratios, (6) works in the presence of non-Gaussian noise, (7) makes use of context dependencies and (8) outputs Bayesian probability estimates. The algorithm has been adapted to the problem of passive sonar signal detection and classification. It runs on a Connection Machine and correctly classifies, within 500 ms of onset, signals embedded in noise and subject to considerable uncertainty. 1 INTRODUCTION This paper describes a neural network algorithm, STOCHASM, that was developed for the purpose of real-time signal detection and classification. Of prime concern was capability for dealing with transient signals having low signal-to-noise ratios (SNR). The algorithm was first developed in 1986 for real-time fault detection and diagnosis of malfunctions in ship gas turbine propulsion systems (Malkoff, 1987).
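
The abstract does not spell out the algorithm, but a hypothetical fragment can illustrate the last property, turning template matches into Bayesian class probabilities under an additive Gaussian-noise assumption (this is not the STOCHASM algorithm itself):

    import numpy as np

    def template_posterior(signal, templates, noise_var, prior=None):
        """Score each class template under a Gaussian-noise model and return
        normalized posterior probabilities over the classes."""
        templates = np.asarray(templates, dtype=float)
        signal = np.asarray(signal, dtype=float)
        prior = np.full(len(templates), 1.0 / len(templates)) if prior is None else np.asarray(prior)
        log_lik = -np.sum((signal - templates) ** 2, axis=1) / (2.0 * noise_var)
        post = prior * np.exp(log_lik - log_lik.max())   # subtract max for numerical stability
        return post / post.sum()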



Sequential Decision Problems and Neural Networks

Neural Information Processing Systems

Decision making tasks that involve delayed consequences are very common yet difficult to address with supervised learning methods. If there is an accurate model of the underlying dynamical system, then these tasks can be formulated as sequential decision problems and solved by Dynamic Programming. This paper discusses reinforcement learning in terms of the sequential decision framework and shows how a learning algorithm similar to the one implemented by the Adaptive Critic Element used in the pole-balancer of Barto, Sutton, and Anderson (1983), and further developed by Sutton (1984), fits into this framework. Adaptive neural networks can play significant roles as modules for approximating the functions required for solving sequential decision problems.
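
A minimal sketch of the kind of adaptive-critic learning referred to here: a TD(0)-style update of a linear value estimate from a recorded trajectory (eligibility traces and the actor half of the pole-balancer are omitted, and the learning constants are illustrative):

    import numpy as np

    def critic_update(w, trajectory, gamma=0.95, lr=0.1):
        """One pass of a TD(0)-style critic over a list of (features, reward)
        pairs, learning the value estimate v(s) = w . phi(s)."""
        for t in range(len(trajectory) - 1):
            phi, r = trajectory[t]
            phi_next, _ = trajectory[t + 1]
            td_error = r + gamma * w @ phi_next - w @ phi   # prediction error from delayed consequences
            w = w + lr * td_error * phi                     # move the prediction toward the bootstrapped target
        return w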


Pulse-Firing Neural Chips for Hundreds of Neurons

Neural Information Processing Systems

We announce new CMOS synapse circuits using only three and four MOSFETs/synapse. Neural states are asynchronous pulse streams, upon which arithmetic is performed directly. Chips implementing over 100 fully programmable synapses are described and projections to networks of hundreds of neurons are made. 1 OVERVIEW OF PULSE FIRING NEURAL VLSI The inspiration for the use of pulse firing in silicon neural networks is clearly the electrical/chemical pulse mechanism in "real" biological neurons. Neurons fire voltage pulses of a frequency determined by their level of activity but of a constant magnitude (usually 5 Volts) [Murray, 1989a]. As indicated in Figure 1, synapses perform arithmetic directly on these asynchronous pulses, to increment or decrement the receiving neuron's activity.
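
A toy software analogue of the pulse-stream arithmetic described above, assuming pulses arrive in discrete time slots with probability proportional to the presynaptic activity (the circuits themselves operate asynchronously in continuous time):

    import numpy as np

    def pulse_stream_synapse(pre_rate, weight, steps=1000, seed=0):
        """Each presynaptic pulse increments (excitatory) or decrements
        (inhibitory) the receiving neuron's activity by the synapse weight,
        so the mean contribution approaches pre_rate * weight."""
        rng = np.random.default_rng(seed)
        post = 0.0
        for _ in range(steps):
            if rng.random() < pre_rate:   # a pulse arrives in this time slot
                post += weight            # arithmetic performed directly on the pulse
        return post / steps

    print(pulse_stream_synapse(0.3, 0.5))  # roughly 0.15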


The CHIR Algorithm for Feed Forward Networks with Binary Weights

Neural Information Processing Systems

A new learning algorithm, Learning by Choice of Internal Representations (CHIR), was recently introduced. Whereas many algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. The algorithm applies a search procedure in the space of internal representations, and a cooperative adaptation of the weights (e.g. by using the perceptron learning rule). Since the introduction of its basic, single-output version, the CHIR algorithm has been generalized to train any feed-forward network of binary neurons. Here we present the generalized version of the CHIR algorithm, and further demonstrate its versatility by describing how it can be modified in order to train networks with binary (±1) weights. Preliminary tests of this binary version on the random teacher problem are also reported.
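
An illustrative fragment of the representation-search idea, assuming ±1 hidden units and fixed output weights; it flips the hidden bits that most move a misclassified pattern toward its target, which conveys the flavour of CHIR's inner step rather than the full algorithm:

    import numpy as np

    def repair_internal_rep(hidden, out_weights, target, max_flips=2):
        """Flip up to max_flips bits of a ±1 internal representation so the
        fixed output weights produce the desired sign for this pattern."""
        hidden = hidden.copy()
        for _ in range(max_flips):
            if np.sign(out_weights @ hidden) == target:
                break
            gains = target * out_weights * (-2 * hidden)  # effect of flipping each bit on the output
            hidden[np.argmax(gains)] *= -1                # flip the most helpful bit
        return hidden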


Can Simple Cells Learn Curves? A Hebbian Model in a Structured Environment

Neural Information Processing Systems

In the mammalian visual cortex, orientation-selective 'simple cells' which detect straight lines may be adapted to detect curved lines instead. We test a biologically plausible, Hebbian, single-neuron model, which learns oriented receptive fields upon exposure to unstructured (noise) input and maintains orientation selectivity upon exposure to edges or bars of all orientations and positions. This model can also learn arc-shaped receptive fields upon exposure to an environment of only circular rings. Thus, new experiments which try to induce an abnormal (curved) receptive field may provide insight into the plasticity of simple cells. The model suggests that exposing cells to only a single spatial frequency may induce more striking spatial frequency and orientation dependent effects than heretofore observed.
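
A compact sketch of a normalized Hebbian (Oja-style) single neuron learning its receptive field from whatever environment it is shown; the paper's model may use a different nonlinearity and constraints, so this only conveys the environment-dependence being tested:

    import numpy as np

    def hebbian_receptive_field(stimuli, lr=0.01, epochs=20, seed=0):
        """Learn a linear receptive field with Oja's rule.  Oriented bars as
        input tend to yield an oriented field; an environment of circular
        rings can yield an arc-shaped one."""
        rng = np.random.default_rng(seed)
        w = rng.normal(size=stimuli.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(epochs):
            for x in stimuli:
                y = w @ x                      # linear response to the stimulus
                w += lr * y * (x - y * w)      # Hebbian growth with implicit normalization
        return w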


Generalized Hopfield Networks and Nonlinear Optimization

Neural Information Processing Systems

A nonlinear neural framework, called the Generalized Hopfield network (GHN), is proposed, which is able to solve in a parallel distributed manner systems of nonlinear equations. The method is applied to the general nonlinear optimization problem. We demonstrate GHNs implementing the three most important optimization algorithms, namely the Augmented Lagrangian, Generalized Reduced Gradient and Successive Quadratic Programming methods. The study results in a dynamic view of the optimization problem and offers a straightforward model for the parallelization of the optimization computations, thus significantly extending the practical limits of problems that can be formulated as optimization problems and that can gain from the introduction of nonlinearities in their structure. The ability of networks of highly interconnected simple nonlinear analog processors (neurons) to solve complicated optimization problems was demonstrated in a series of papers by Hopfield and Tank (Hopfield, 1984), (Tank, 1986).
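
As a sketch of one of the three methods named, here is a gradient-descent/ascent flow on an augmented Lagrangian for an equality-constrained problem; the step sizes, penalty, and example problem are illustrative and not the network dynamics derived in the paper:

    import numpy as np

    def augmented_lagrangian_flow(f_grad, h, h_jac, x0, mu=10.0, lr=1e-3, steps=5000):
        """Descend in x and ascend in the multipliers on
        L(x, lam) = f(x) + lam . h(x) + (mu / 2) * ||h(x)||^2."""
        x = np.asarray(x0, dtype=float)
        lam = np.zeros(len(h(x)))
        for _ in range(steps):
            c = h(x)
            x = x - lr * (f_grad(x) + h_jac(x).T @ (lam + mu * c))   # primal descent
            lam = lam + lr * mu * c                                  # dual ascent
        return x, lam

    # Minimize x1^2 + x2^2 subject to x1 + x2 = 1; the optimum is (0.5, 0.5).
    f_grad = lambda x: 2.0 * x
    h = lambda x: np.array([x[0] + x[1] - 1.0])
    h_jac = lambda x: np.array([[1.0, 1.0]])
    print(augmented_lagrangian_flow(f_grad, h, h_jac, [0.0, 0.0]))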