
 Information Technology


Reading a Neural Code

Neural Information Processing Systems

Traditional methods of studying neural coding characterize the encoding of known stimuli in average neural responses. Organisms face nearly the opposite task - decoding short segments of a spike train to extract information about an unknown, time-varying stimulus. Here we present strategies for characterizing the neural code from the point of view of the organism, culminating in algorithms for real-time stimulus reconstruction based on a single sample of the spike train. These methods are applied to the design and analysis of experiments on an identified movement-sensitive neuron in the fly visual system. As far as we know this is the first instance in which a direct "reading" of the neural code has been accomplished.
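As a concrete illustration of what such a real-time "reading" involves, here is a minimal sketch of linear stimulus reconstruction: an estimate of the stimulus is formed by placing a copy of a decoding kernel at every spike time and summing. The kernel shape and all numerical values below are hypothetical stand-ins; the paper derives its reconstruction filter from the measured responses of the fly neuron.

```python
import numpy as np

def reconstruct_stimulus(spike_times, kernel, dt, duration):
    """Linear decoding: s_est(t) = sum over spikes i of K(t - t_i),
    i.e. one copy of the kernel K is added at each spike time."""
    n = int(duration / dt)
    s_est = np.zeros(n + len(kernel))
    for t_i in spike_times:
        start = int(t_i / dt)
        s_est[start:start + len(kernel)] += kernel
    return s_est[:n]

dt = 0.001                                   # 1 ms time resolution
tau = np.arange(0.0, 0.1, dt)                # 100 ms of kernel support
kernel = np.exp(-tau / 0.02) * np.sin(2 * np.pi * tau / 0.05)  # assumed shape

spikes = [0.012, 0.034, 0.051, 0.090]        # a short sample spike train (s)
estimate = reconstruct_stimulus(spikes, kernel, dt, duration=0.2)
```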


An Efficient Implementation of the Back-propagation Algorithm on the Connection Machine CM-2

Neural Information Processing Systems

In this paper, we present a novel implementation of the widely used Back-propagation neural net learning algorithm on the Connection Machine CM-2 - a general purpose, massively parallel computer with a hypercube topology. This implementation runs at about 180 million interconnections per second (IPS) on a 64K processor CM-2. The main interprocessor communication operation used is 2D nearest neighbor communication. The techniques developed here can be easily extended to implement other algorithms for layered neural nets on the CM-2, or on other massively parallel computers which have 2D or higher degree connections among their processors.

1 Introduction

High-speed simulation of large artificial neural nets has become an important tool for solving real world problems and for studying the dynamic behavior of large populations of interconnected processing elements [3, 2]. This work is intended to provide such a simulation tool for a widely used neural net learning algorithm - the Back-propagation (BP) algorithm [7]. The hardware we have used is the Connection Machine CM-2.
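To make the 2D nearest-neighbor idea concrete, here is a toy software model of the forward pass. This is our own reconstruction, not the authors' CM-2 code: imagine processor (i, j) holding weight W[i, j], with activations rotated along each row one neighbor per step while each row accumulates one multiply-add.

```python
import numpy as np

def systolic_matvec(W, x):
    """Compute y = W @ x in n steps using only local multiply-adds and
    cyclic nearest-neighbor rotation of the activations, mimicking a
    2D processor grid in which processor (i, j) stores W[i, j]."""
    n = W.shape[0]
    y = np.zeros(n)
    rows = np.arange(n)
    for s in range(n):
        idx = (rows + s) % n            # activation currently resident at row i
        y += W[rows, idx] * x[idx]      # one multiply-accumulate per row per step
    return y

W = np.random.default_rng(0).standard_normal((8, 8))
x = np.random.default_rng(1).standard_normal(8)
assert np.allclose(systolic_matvec(W, x), W @ x)
```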


Asymptotic Convergence of Backpropagation: Numerical Experiments

Neural Information Processing Systems

We have calculated, both analytically and in simulations, the rate of convergence at long times in the backpropagation learning algorithm for networks with and without hidden units. Our basic finding for units using the standard sigmoid transfer function is 1/t convergence of the error for large t, with at most logarithmic corrections for networks with hidden units. Other transfer functions may lead to a slower polynomial rate of convergence. Our analytic calculations were presented in (Tesauro, He & Ahmad, 1989). Here we focus in more detail on our empirical measurements of the convergence rate in numerical simulations, which confirm our analytic results.
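The measurement itself is easy to reproduce in miniature: if the error decays as E(t) ≈ c/t at large t, then the product t·E(t) should level off at the constant c. The tiny single-layer sigmoid net below is an assumed stand-in for the networks studied in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
targets = (X[:, 0] > 0).astype(float)     # a separable toy task with 0/1 targets
w = 0.1 * rng.standard_normal(3)
eta = 0.5

for t in range(1, 100001):
    y = sigmoid(X @ w)
    err = y - targets
    w -= eta * X.T @ (err * y * (1 - y)) / len(X)   # standard backprop gradient
    if t % 20000 == 0:
        E = 0.5 * np.mean(err ** 2)
        print(f"t = {t:6d}   E = {E:.3e}   t*E = {t * E:.4f}")
# If E ~ c/t for large t, the t*E column approaches the constant c.
```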


The Cascade-Correlation Learning Architecture

Neural Information Processing Systems

Cascade-Correlation is a new architecture and supervised learning algorithm for artificial neural networks. Instead of just adjusting the weights in a network of fixed topology, Cascade-Correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the network, its input-side weights are frozen. This unit then becomes a permanent feature-detector in the network, available for producing outputs or for creating other, more complex feature detectors. The Cascade-Correlation architecture has several advantages over existing algorithms: it learns very quickly, the network determines its own size and topology, it retains the structures it has built even if the training set changes, and it requires no back-propagation of error signals through the connections of the network.
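The loop the abstract describes can be sketched compactly under simplifying assumptions: a single output trained by least squares, one candidate unit per round (the paper trains a pool), and plain gradient ascent on the correlation objective instead of the paper's Quickprop. All helper names are ours.

```python
import numpy as np

def train_candidate(H, residual, steps=500, lr=0.1, seed=0):
    """Train a candidate hidden unit to maximize the magnitude of the
    covariance between its output and the current residual error."""
    v = 0.1 * np.random.default_rng(seed).standard_normal(H.shape[1])
    r = residual - residual.mean()
    for _ in range(steps):
        h = np.tanh(H @ v)
        S = (h - h.mean()) @ r                       # correlation objective
        grad = np.sign(S) * (H.T @ (r * (1 - h ** 2)))
        v += lr * grad / len(H)                      # gradient ascent on |S|
    return v

def cascade_correlation(X, y, n_hidden=5):
    H = np.hstack([X, np.ones((len(X), 1))])         # inputs plus a bias column
    for _ in range(n_hidden):
        w_out = np.linalg.lstsq(H, y, rcond=None)[0] # fit output weights
        residual = y - H @ w_out
        v = train_candidate(H, residual)             # input-side weights, then frozen
        H = np.hstack([H, np.tanh(H @ v)[:, None]])  # new unit feeds all later units
    w_out = np.linalg.lstsq(H, y, rcond=None)[0]
    return H, w_out
```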



Analytic Solutions to the Formation of Feature-Analysing Cells of a Three-Layer Feedforward Visual Information Processing Neural Net

Neural Information Processing Systems

Analytic solutions to the information-theoretic evolution equation of the connection strength of a three-layer feedforward neural net for visual information processing are presented. The results are: (1) the receptive fields of the feature-analysing cells correspond to the eigenfunction associated with the maximum eigenvalue of the Fredholm integral equation of the first kind derived from the evolution equation of the connection strength; (2) a symmetry-breaking mechanism (parity violation) has been identified as responsible for the changes in the morphology of the receptive field; (3) the conditions for the formation of different morphologies are explicitly identified.
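Result (1) invites a direct numerical check: discretize the kernel of the first-kind Fredholm equation on a grid and read off the eigenvector of the largest eigenvalue as the predicted receptive field, inspecting the parity of the leading modes for result (2). The Gaussian kernel below is an assumed placeholder for the kernel the paper derives.

```python
import numpy as np

# Discretize a symmetric kernel K(x, x') on a 1D grid; a Gaussian
# correlation kernel stands in for the paper's derived kernel.
x = np.linspace(-3.0, 3.0, 200)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)

eigvals, eigvecs = np.linalg.eigh(K)          # ascending eigenvalues
receptive_field = eigvecs[:, -1]              # eigenvector of the max eigenvalue

# Parity of the leading modes (even vs. odd under x -> -x) signals the
# symmetry-breaking that changes the receptive-field morphology.
for k in range(1, 4):
    v = eigvecs[:, -k]
    even = np.linalg.norm(v - v[::-1]) < np.linalg.norm(v + v[::-1])
    print(f"mode {k}: eigenvalue {eigvals[-k]:.3f}, parity {'even' if even else 'odd'}")
```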


Pulse-Firing Neural Chips for Hundreds of Neurons

Neural Information Processing Systems

We announce new CMOS synapse circuits using only three and four MOSFETs/synapse. Neural states are asynchronous pulse streams, upon which arithmetic is performed directly. Chips implementing over 100 fully programmable synapses are described and projections to networks of hundreds of neurons are made.

1 OVERVIEW OF PULSE FIRING NEURAL VLSI

The inspiration for the use of pulse firing in silicon neural networks is clearly the electrical/chemical pulse mechanism in "real" biological neurons. Neurons fire voltage pulses of a frequency determined by their level of activity but of a constant magnitude (usually 5 Volts) [Murray, 1989a]. As indicated in Figure 1, synapses perform arithmetic directly on these asynchronous pulses, to increment or decrement the receiving neuron's activity.
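A toy software model of the pulse-stream arithmetic just described, our abstraction rather than the CMOS circuit itself: a neural state is encoded as a pulse rate, and each arriving pulse increments or decrements the receiving neuron's activity by the synaptic weight.

```python
import numpy as np

rng = np.random.default_rng(1)

def pulse_stream(rate, n_steps):
    """Encode an activity level in [0, 1] as an asynchronous pulse stream:
    at each time step a pulse is present with probability `rate`."""
    return rng.random(n_steps) < rate

def synapse_accumulate(pulses, weight, activity=0.0):
    """Arithmetic directly on pulses: each one increments (weight > 0)
    or decrements (weight < 0) the receiving neuron's activity."""
    for pulse in pulses:
        if pulse:
            activity += weight
    return activity

a = pulse_stream(0.8, 1000)      # a highly active presynaptic neuron
b = pulse_stream(0.3, 1000)      # a weakly active one
total = synapse_accumulate(a, +0.01) + synapse_accumulate(b, -0.02)
print(total)                     # about 0.8*1000*0.01 - 0.3*1000*0.02 = 2.0
```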


Digital-Analog Hybrid Synapse Chips for Electronic Neural Networks

Neural Information Processing Systems

Electronic synapses based on CMOS, EEPROM, as well as thin film technologies are actively being developed [1-5]. One preferred approach is based on a hybrid digital-analog design which can easily be implemented in CMOS with simple interface and analog circuitry. The hybrid design utilizes digital memories to store the synaptic weights and digital-to-analog converters to perform analog multiplication. A variety of synaptic chips based on such hybrid designs have been developed and used as "building blocks" in larger neural network hardware systems fabricated at JPL. In this paper, the design and operational characteristics of the hybrid synapse chips are described.
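The core of the hybrid scheme, a digital word that stores the weight and a DAC that scales an analog input by it, can be modeled in a few lines. The 6-bit resolution and the ranges below are illustrative assumptions, not the JPL chips' specifications.

```python
import numpy as np

BITS = 6                        # assumed weight resolution
LEVELS = 2 ** (BITS - 1)        # signed codes: -LEVELS .. LEVELS - 1

def store_weight(w, w_max=1.0):
    """Digital memory: quantize a weight to a signed BITS-bit integer code."""
    return int(np.clip(round(w / w_max * LEVELS), -LEVELS, LEVELS - 1))

def synapse_output(code, v_in, w_max=1.0):
    """DAC-style analog multiplication: the stored code scales the input."""
    return (code / LEVELS) * w_max * v_in

code = store_weight(0.37)
print(code, synapse_output(code, v_in=0.5))   # 12 -> 0.1875 with these settings
```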


Analysis of Linsker's Simulations of Hebbian Rules

Neural Information Processing Systems

Linsker has reported the development of centre-surround receptive fields and oriented receptive fields in simulations of a Hebb-type equation in a linear network. The dynamics of the learning rule are analysed in terms of the eigenvectors of the covariance matrix of cell activities. Analytic and computational results for Linsker's covariance matrices, and some general theorems, lead to an explanation of the emergence of centre-surround and certain oriented structures. Linsker [Linsker, 1986, Linsker, 1988] has studied by simulation the evolution of weight vectors under a Hebb-type teacherless learning rule in a feed-forward linear network. The equation for the evolution of the weight vector w of a single neuron, derived by ensemble averaging the Hebbian rule over the statistics of the input patterns, is:

$$\dot{w}_i = k_1 + \sum_j \left( Q_{ij} + k_2 \right) w_j,$$

where $Q$ is the covariance matrix of the cell activities and $k_1$, $k_2$ are constants.
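A numerical sketch of the analysis follows: integrate the averaged dynamics under hard weight limits and compare the emerging weight vector with the leading eigenvector of Q + k2, which the eigenvector analysis predicts dominates the growth. The Gaussian covariance and the constants are illustrative assumptions.

```python
import numpy as np

n = 64
pos = np.linspace(-1.0, 1.0, n)
Q = np.exp(-((pos[:, None] - pos[None, :]) ** 2) / 0.1)   # assumed covariance
k1, k2 = 0.1, -0.05                                       # illustrative constants

w = np.random.default_rng(2).uniform(-0.1, 0.1, n)
dt = 1e-3
for _ in range(20000):
    w += dt * (k1 + (Q + k2) @ w)     # dw_i/dt = k1 + sum_j (Q_ij + k2) w_j
    w = np.clip(w, -1.0, 1.0)         # hard limits on the weights

# Overlap between the saturated weight vector and the principal eigenvector:
eigvals, eigvecs = np.linalg.eigh(Q + k2)
principal = eigvecs[:, -1]
cosine = abs(w @ principal) / (np.linalg.norm(w) * np.linalg.norm(principal))
print(f"overlap with leading eigenvector: {cosine:.3f}")
```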


A self-organizing multiple-view representation of 3D objects

Neural Information Processing Systems

We demonstrate the ability of a two-layer network of thresholded summation units to support representation of 3D objects in which several distinct 2D views are stored for each object. Using unsupervised Hebbian relaxation, the network learned to recognize ten objects from different viewpoints. The training process led to the emergence of compact representations of the specific input views. When tested on novel views of the same objects, the network exhibited a substantial generalization capability. In simulated psychophysical experiments, the network's behavior was qualitatively similar to that of human subjects.
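A minimal sketch of this storage scheme under assumed parameters: binary view vectors, one representation unit per object, and a Hebbian update that strengthens connections from each stored view, with recognition by summation and a winner-take-all readout. Everything numerical here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n_input, n_objects, views_per_object = 100, 10, 4

# Binary "views": each object has a prototype, and its stored views are
# prototypes with a few bits flipped (a crude stand-in for 2D projections).
prototypes = rng.random((n_objects, n_input)) < 0.2
views = np.array([[np.logical_xor(p, rng.random(n_input) < 0.05)
                   for _ in range(views_per_object)] for p in prototypes])

# Hebbian learning: strengthen connections from every stored view to the
# representation unit for its object.
W = np.zeros((n_objects, n_input))
for obj in range(n_objects):
    for view in views[obj]:
        W[obj] += 0.1 * view

def recognize(view):
    """Summation units with a winner-take-all readout."""
    return int(np.argmax(W @ view))

# A novel, noisier view of object 7 should usually activate the right unit:
novel = np.logical_xor(prototypes[7], rng.random(n_input) < 0.1)
print(recognize(novel))
```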