Pulse-Firing Neural Chips for Hundreds of Neurons

Neural Information Processing Systems

We announce new CMOS synapse circuits using only three and four MOSFETs/synapse. Neural states are asynchronous pulse streams, upon which arithmetic is performed directly. Chips implementing over 100 fully programmable synapses are described, and projections to networks of hundreds of neurons are made.

1 OVERVIEW OF PULSE FIRING NEURAL VLSI

The inspiration for the use of pulse firing in silicon neural networks is clearly the electrical/chemical pulse mechanism in "real" biological neurons. Neurons fire voltage pulses at a frequency determined by their level of activity but of a constant magnitude (usually 5 Volts) [Murray, 1989a]. As indicated in Figure 1, synapses perform arithmetic directly on these asynchronous pulses, to increment or decrement the receiving neuron's activity.
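The arithmetic-on-pulses idea can be illustrated with a small simulation: a neural state is a stream of fixed-magnitude pulses whose frequency encodes activity, and a synaptic weight determines how much each incoming pulse increments or decrements the receiving neuron's activity. The Python sketch below is our own illustration of that scheme, not the chip's circuitry; all names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def pulse_stream(rate, n_steps):
        # Asynchronous pulse stream: a pulse occurs with probability `rate`
        # per time step; magnitude is fixed, only the frequency varies.
        return (rng.random(n_steps) < rate).astype(float)

    def integrate(pulses, weight, decay=0.99):
        # Synapse arithmetic directly on the pulses: each pulse increments
        # (weight > 0) or decrements (weight < 0) the neuron's activity.
        activity = 0.0
        for p in pulses:
            activity = decay * activity + weight * p
        return activity

    print(integrate(pulse_stream(0.3, 1000), weight=0.05))   # excitatory
    print(integrate(pulse_stream(0.3, 1000), weight=-0.05))  # inhibitory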


The CHIR Algorithm for Feed Forward Networks with Binary Weights

Neural Information Processing Systems

A new learning algorithm, Learning by Choice of Internal Representations (CHIR), was recently introduced. Whereas many algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. The algorithm applies a search procedure in the space of internal representations, and a cooperative adaptation of the weights (e.g. by using the perceptron learning rule). Since the introduction of its basic, single-output version, the CHIR algorithm has been generalized to train any feed-forward network of binary neurons. Here we present the generalized version of the CHIR algorithm, and further demonstrate its versatility by describing how it can be modified in order to train networks with binary (±1) weights. Preliminary tests of this binary version on the random teacher problem are also reported.
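As a rough illustration of the idea, the Python sketch below (our own schematic construction, not the authors' code) alternates perceptron-rule weight adaptation with a search step that flips a bit of a failing pattern's internal representation, here on XOR with ±1 units.

    import numpy as np

    rng = np.random.default_rng(0)

    def sgn(z):
        y = np.sign(z)
        y[y == 0] = 1
        return y

    def perceptron(X, T, epochs=100):
        # Perceptron rule for +/-1 units; one weight row per output unit.
        W = np.zeros((T.shape[1], X.shape[1]))
        for _ in range(epochs):
            for x, t in zip(X, T):
                W += np.outer(t - sgn(W @ x), x) / 2
        return W

    # XOR in +/-1 coding; the last input column is a constant bias.
    X = np.array([[-1, -1, 1], [-1, 1, 1], [1, -1, 1], [1, 1, 1]], float)
    T = np.array([[-1], [1], [1], [-1]], float)

    H = np.hstack([sgn(rng.standard_normal((4, 2))), np.ones((4, 1))])
    for _ in range(200):
        W1 = perceptron(X, H[:, :2])      # realize the chosen internal reps
        W2 = perceptron(H, T)             # map internal reps to targets
        Hhat = np.hstack([sgn(X @ W1.T), np.ones((4, 1))])
        wrong = np.flatnonzero((sgn(Hhat @ W2.T) != T).any(axis=1))
        if wrong.size == 0:
            break
        # CHIR-style step: change the internal representation of a failing
        # pattern, then let the perceptron rule re-adapt the weights.
        H[wrong[0], rng.integers(2)] *= -1
    print(sgn(Hhat @ W2.T).ravel())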


Can Simple Cells Learn Curves? A Hebbian Model in a Structured Environment

Neural Information Processing Systems

In the mammalian visual cortex, orientation-selective 'simple cells' which detect straight lines may be adapted to detect curved lines instead. We test a biologically plausible, Hebbian, single-neuron model, which learns oriented receptive fields upon exposure to unstructured (noise) input and maintains orientation selectivity upon exposure to edges or bars of all orientations and positions. This model can also learn arc-shaped receptive fields upon exposure to an environment of only circular rings. Thus, new experiments which try to induce an abnormal (curved) receptive field may provide insight into the plasticity of simple cells. The model suggests that exposing cells to only a single spatial frequency may induce more striking spatial-frequency- and orientation-dependent effects than heretofore observed.
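A standard stand-in for such a Hebbian single-neuron learner is Oja's rule, which keeps the weight vector bounded while it aligns with the dominant correlations of the input; in a structured environment the inputs would be bars or rings rather than noise. The Python sketch below uses Oja's rule as our substitution, not necessarily the paper's exact model.

    import numpy as np

    rng = np.random.default_rng(0)

    def oja_step(w, x, lr=1e-3):
        # Hebbian growth (lr * y * x) plus Oja's decay term (-lr * y^2 * w),
        # which keeps |w| bounded without an explicit normalization step.
        y = w @ x
        return w + lr * y * (x - y * w)

    n = 16 * 16                          # a small patch of visual input
    w = 0.01 * rng.standard_normal(n)
    for _ in range(5000):
        x = rng.standard_normal(n)       # unstructured (noise) environment;
        w = oja_step(w, x)               # structured input would be bars/rings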


Generalized Hopfield Networks and Nonlinear Optimization

Neural Information Processing Systems

A nonlinear neural framework, called the Generalized Hopfield Network (GHN), is proposed, which is able to solve in a parallel distributed manner systems of nonlinear equations. The method is applied to the general nonlinear optimization problem. We demonstrate GHNs implementing the three most important optimization algorithms, namely the Augmented Lagrangian, Generalized Reduced Gradient and Successive Quadratic Programming methods. The study results in a dynamic view of the optimization problem and offers a straightforward model for the parallelization of the optimization computations, thus significantly extending the practical limits of problems that can be formulated as an optimization problem and which can gain from the introduction of nonlinearities in their structure. The ability of networks of highly interconnected simple nonlinear analog processors (neurons) to solve complicated optimization problems was demonstrated in a series of papers by Hopfield and Tank (Hopfield, 1984; Tank, 1986).
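The flavor of such a network can be seen in a toy augmented-Lagrangian dynamic: continuous "neurons" descend the gradient of L(x, lam) = f(x) + lam*h(x) + (c/2)*h(x)^2 while the multiplier ascends on the constraint. A schematic Python sketch on a toy problem of our own choosing, not the paper's formulation:

    import numpy as np

    # Toy problem: minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
    def grad_f(x): return 2 * x
    def h(x):      return x[0] + x[1] - 1.0
    def grad_h(x): return np.array([1.0, 1.0])

    c, dt = 10.0, 0.01                   # penalty weight, Euler time step
    x, lam = np.zeros(2), 0.0
    for _ in range(2000):
        # Descend on x, ascend on the multiplier: the "neurons" integrate
        # -grad_x of the augmented Lagrangian L = f + lam*h + (c/2)*h^2.
        x   -= dt * (grad_f(x) + (lam + c * h(x)) * grad_h(x))
        lam += dt * h(x)
    print(x)                             # approaches (0.5, 0.5)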


An Efficient Implementation of the Back-propagation Algorithm on the Connection Machine CM-2

Neural Information Processing Systems

In this paper, we present a novel implementation of the widely used Back-propagation neural net learning algorithm on the Connection Machine CM-2, a general purpose, massively parallel computer with a hypercube topology. This implementation runs at about 180 million interconnections per second (IPS) on a 64K-processor CM-2. The main interprocessor communication operation used is 2D nearest-neighbor communication. The techniques developed here can easily be extended to implement other algorithms for layered neural nets on the CM-2, or on other massively parallel computers which have 2D or higher-degree connections among their processors.

1 Introduction

High-speed simulation of large artificial neural nets has become an important tool for solving real-world problems and for studying the dynamic behavior of large populations of interconnected processing elements [3, 2]. This work is intended to provide such a simulation tool for a widely used neural net learning algorithm, the Back-propagation (BP) algorithm [7]. The hardware we have used is the Connection Machine CM-2.
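The algorithm being accelerated is ordinary back-propagation; in the CM-2 implementation it is the large matrix products in each step that get distributed over the processor grid with 2D nearest-neighbor communication. Below is a sequential NumPy sketch of one BP step for orientation, our own illustration rather than the CM-2 code:

    import numpy as np

    rng = np.random.default_rng(0)

    def bp_step(W1, W2, X, T, lr=0.1):
        # Forward pass, then back-propagate errors through two tanh layers.
        H = np.tanh(X @ W1)
        Y = np.tanh(H @ W2)
        dY = (Y - T) * (1 - Y ** 2)      # output-layer error signal
        dH = (dY @ W2.T) * (1 - H ** 2)  # hidden-layer error signal
        return W1 - lr * X.T @ dH, W2 - lr * H.T @ dY

    X = rng.standard_normal((32, 8))
    T = np.tanh(rng.standard_normal((32, 4)))
    W1 = 0.1 * rng.standard_normal((8, 16))
    W2 = 0.1 * rng.standard_normal((16, 4))
    for _ in range(100):
        W1, W2 = bp_step(W1, W2, X, T)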


Neurally Inspired Plasticity in Oculomotor Processes

Neural Information Processing Systems

We have constructed a two-axis camera positioning system which is roughly analogous to a single human eye. This Artificial Eye (A-eye) combines the signals generated by two rate gyroscopes with motion information extracted from visual analysis to stabilize its camera. This stabilization process is similar to the vestibulo-ocular response (VOR); like the VOR, A-eye learns a system model that can be incrementally modified to adapt to changes in its structure, performance and environment. A-eye is an example of a robust sensory system that performs computations that can be of significant use to the designers of mobile robots.

1 Introduction

We have constructed an "artificial eye" (A-eye), an autonomous robot that incorporates a two-axis camera positioning system (figure 1). Like the human oculomotor system, A-eye can estimate the rotation rate of its body with a gyroscope and estimate the rotation rate of its "eye" by measuring image slip.
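The adaptive loop can be caricatured in a few lines: the camera is counter-rotated by a gyroscope-driven gain, and the residual image slip drives an incremental update of that gain, as in VOR gain adaptation. A one-axis Python sketch, our simplification of a system that in fact has two axes and a full learned model:

    import numpy as np

    rng = np.random.default_rng(0)
    gain, lr = 0.5, 0.05                 # initial (wrong) VOR gain; ideal is 1.0
    for _ in range(200):
        head_rate = rng.standard_normal()    # rate gyroscope signal
        eye_rate = -gain * head_rate         # counter-rotate the camera
        slip = head_rate + eye_rate          # residual image motion on the sensor
        gain += lr * slip * head_rate        # incrementally adapt to null the slip
    print(gain)                          # approaches 1.0 (full compensation)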


Training Stochastic Model Recognition Algorithms as Networks can Lead to Maximum Mutual Information Estimation of Parameters

Neural Information Processing Systems

One of the attractions of neural network approaches to pattern recognition is the use of a discrimination-based training method. We show that once we have modified the output layer of a multilayer perceptron to provide mathematically correct probability distributions, and replaced the usual squared error criterion with a probability-based score, the result is equivalent to Maximum Mutual Information training, which has been used successfully to improve the performance of hidden Markov models for speech recognition. If the network is specially constructed to perform the recognition computations of a given kind of stochastic model based classifier, then we obtain a method for discrimination-based training of the parameters of the models. Examples include an HMM-based word discriminator, which we call an 'Alphanet'.
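Concretely, the two modifications are a normalizing output layer (softmax) and a negative log-probability score in place of squared error; minimizing that score is the discrimination-based criterion the paper identifies with Maximum Mutual Information training. A minimal Python sketch:

    import numpy as np

    def softmax(z):
        # Output layer producing mathematically correct probabilities.
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def mmi_loss(logits, labels):
        # Probability-based score: negative log-probability of the correct
        # class, replacing the usual squared error criterion.
        p = softmax(logits)
        return -np.mean(np.log(p[np.arange(len(labels)), labels]))

    logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.2, 0.3]])
    labels = np.array([0, 1])
    print(mmi_loss(logits, labels))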


A Neural Network to Detect Homologies in Proteins

Neural Information Processing Systems

Furthermore, sequence similarity often results from common ancestors. Immunoglobulin (Ig) domains are sets of β-sheets bound by cysteine bonds and with a characteristic tertiary structure. Such domains are found in many proteins involved in immune, cell adhesion and receptor functions. These proteins collectively form the immunoglobulin superfamily (for review, see Williams and Barclay, 1987). Members of the superfamily often possess several Ig domains.


On the Distribution of the Number of Local Minima of a Random Function on a Graph

Neural Information Processing Systems

Minimization of energy or error functions has proved to be a useful principle in the design and analysis of neural networks and neural algorithms. A brief list of examples includes: the back-propagation algorithm, the use of optimization methods in computational vision, the application of analog networks to the approximate solution of NP-complete problems, and the Hopfield model of associative memory.
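The object of study can be stated concretely: assign i.i.d. random values to the vertices of a graph and count the vertices whose value is smaller than all of their neighbors' values. A Monte Carlo sketch of that distribution on a cycle, an example graph of our own choosing:

    import numpy as np

    rng = np.random.default_rng(0)

    def count_local_minima(values, neighbors):
        # A vertex is a local minimum if its value is below all neighbors'.
        return sum(all(values[v] < values[u] for u in neighbors[v])
                   for v in range(len(values)))

    n = 20
    neighbors = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}   # cycle graph
    counts = [count_local_minima(rng.random(n), neighbors) for _ in range(10000)]
    print(np.bincount(counts) / len(counts))   # empirical distribution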


Learning Aspect Graph Representations from View Sequences

Neural Information Processing Systems

In our effort to develop a modular neural system for invariant learning and recognition of 3D objects, we introduce here a new module architecture called an aspect network, constructed around adaptive axo-axo-dendritic synapses. This builds upon our existing system (Seibert & Waxman, 1989), which processes 2D shapes and classifies them into view categories (i.e., aspects) invariant to illumination, position, orientation, ...