
Performance of a Stochastic Learning Microchip

Neural Information Processing Systems

We have fabricated a test chip in 2-micron CMOS technology that embodies these ideas, and we report our evaluation of the microchip and our plans for improvements. Knowledge is encoded in the test chip by presenting it with digital patterns that are examples of a desired input-output Boolean mapping. This knowledge is learned and stored entirely on chip, as connection strengths between neuron-like elements held in digitally controlled synapse-like elements. The only portion of this learning system that is off chip is the VLSI test equipment used to present the patterns. The learning system uses a modified Boltzmann machine algorithm [3] which, if simulated on a serial digital computer, takes enormous amounts of computer time; our physical implementation is about 100,000 times faster. The test chip, if expanded to a board-level system of thousands of neurons, would be an appropriate architecture for solving artificial intelligence problems whose solutions are hard to specify with a conventional rule-based approach. Examples include speech and pattern recognition and the encoding of some types of expert knowledge.
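
The abstract does not spell out the on-chip update rule, but the classic Boltzmann machine learning step that such hardware accelerates can be sketched as follows; the Gibbs sampler, temperature, and variable names here are our illustrative assumptions, not the chip's modified algorithm.

    import numpy as np

    rng = np.random.default_rng(0)

    def gibbs_sweep(W, b, s, T=1.0, steps=100):
        # Stochastic binary units: each unit turns on with a probability
        # determined by its net input, as in a Boltzmann machine.
        n = len(s)
        for _ in range(steps):
            i = rng.integers(n)
            net = W[i] @ s + b[i]
            p_on = 1.0 / (1.0 + np.exp(-net / T))
            s[i] = 1.0 if rng.random() < p_on else 0.0
        return s

    def boltzmann_step(W, corr_clamped, corr_free, lr=0.05):
        # Learning rule: dW ~ <s_i s_j>_clamped - <s_i s_j>_free, where the
        # correlations are estimated by sampling with and without the
        # input/output units clamped to a training pattern.
        return W + lr * (corr_clamped - corr_free)

The speedup quoted in the abstract presumably comes from replacing serial sampling loops like these with parallel on-chip dynamics.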



A Self-Learning Neural Network

Neural Information Processing Systems

We propose a new neural network structure that is compatible with silicon technology and has built-in learning capability. The thrust of this work is a new synapse function for an artificial neuron to be used in a neural network. The synapses have the feature that the learning parameter is embodied in the thresholds of MOSFET devices and is local in character. The network is shown to be capable of learning by example as well as exhibiting the desirable features of Hopfield-type networks.
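
As a point of reference for what "local in character" means, a local synaptic update depends only on quantities available at that synapse; a minimal Hebbian-style sketch (the names and the rule's form are our assumption, not the paper's MOSFET-threshold mechanism):

    import numpy as np

    def local_synapse_update(w, pre, post, lr=0.01):
        # Local rule: the change to each weight w[i, j] uses only the
        # activities of its own pre- and post-synaptic neurons, so no
        # global error signal has to be routed across the chip.
        return w + lr * np.outer(post, pre)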


A Bifurcation Theory Approach to the Programming of Periodic Attractors in Network Models of Olfactory Cortex

Neural Information Processing Systems

A new learning algorithm for the storage of static and periodic attractors in biologically inspired recurrent analog neural networks is introduced. For a network of n nodes, n static or n/2 periodic attractors may be stored. The algorithm allows programming of the network vector field independently of the patterns to be stored. Stability of patterns, basin geometry, and rates of convergence may be controlled. Standing or traveling wave cycles may be stored to mimic the kind of oscillating spatial patterns that appear in the neural activity of the olfactory bulb and prepyriform cortex during inspiration and suffice, in the bulb, to predict the pattern recognition behavior of rabbits in classical conditioning experiments.
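
For orientation, the count of n static versus n/2 periodic attractors can be read off a standard analog recurrent network model (the notation below is ours, not necessarily Baird's):

    \dot{x}_i = -x_i + \sum_{j=1}^{n} W_{ij}\, g(x_j), \qquad i = 1, \dots, n.

A static attractor is an equilibrium tied to a single real mode of the linearized dynamics, while a periodic attractor (limit cycle) is born from a complex-conjugate eigenvalue pair \alpha \pm i\omega and therefore consumes two of the n available modes, giving at most n static or n/2 periodic patterns.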


Implications of Recursive Distributed Representations

Neural Information Processing Systems

I will describe my recent results on the automatic development of fixed-width recursive distributed representations of variable-sized hierarchical data structures. One implication of this work is that certain types of AI-style data structures can now be represented in fixed-width analog vectors. Simple inferences can be performed using the type of pattern associations that neural networks excel at. Another implication arises from noting that these representations become self-similar in the limit. Once this door to chaos is opened…
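
The mechanism behind such fixed-width representations is a compressor/reconstructor pair trained as an autoencoder over tree nodes; a minimal sketch under our own sizes and names (the original architecture may differ in detail):

    import numpy as np

    rng = np.random.default_rng(1)
    D = 16                                    # fixed representation width
    W_enc = rng.normal(0, 0.1, (D, 2 * D))    # compressor: two children -> one parent
    W_dec = rng.normal(0, 0.1, (2 * D, D))    # reconstructor: parent -> two children

    def encode(left, right):
        # Compress two fixed-width child vectors into one parent vector of
        # the same width, so trees of any depth fit in D numbers.
        return np.tanh(W_enc @ np.concatenate([left, right]))

    def decode(parent):
        # Recover approximate child vectors; applied recursively, this
        # unpacks the stored tree.
        out = np.tanh(W_dec @ parent)
        return out[:D], out[D:]

    # Encode the tree ((a b) c): inner node first, then the root.
    a, b, c = (rng.normal(0, 1, D) for _ in range(3))
    root = encode(encode(a, b), c)    # still only D numbers
    left, right = decode(root)        # approximately (a b) and c

Because decoding a parent yields vectors that can themselves be decoded, the representation is recursive, which is also where the self-similarity noted above enters.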


ALVINN: An Autonomous Land Vehicle in a Neural Network

Neural Information Processing Systems

ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.
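
A toy forward pass conveying the abstract's input-to-steering mapping; the layer sizes below are illustrative only, and the laser range finder input is omitted for brevity:

    import numpy as np

    rng = np.random.default_rng(2)
    N_IN, N_HID, N_OUT = 30 * 32, 29, 45    # illustrative sizes, not ALVINN's exact ones

    W1 = rng.normal(0, 0.1, (N_HID, N_IN))
    W2 = rng.normal(0, 0.1, (N_OUT, N_HID))

    def steer(image):
        # 3-layer network: camera image in, a bank of direction units out;
        # the most active output unit is read as the steering command.
        h = np.tanh(W1 @ image.ravel())
        y = np.tanh(W2 @ h)
        return int(np.argmax(y))    # index into discretized steering angles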


Training a Limited-Interconnect, Synthetic Neural IC

Neural Information Processing Systems

Hardware implementation of neuromorphic algorithms is hampered by high degrees of connectivity. Functionally equivalent feedforward networks may be formed by using limited fan-in nodes and additional layers.
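
A worked example of the fan-in/depth trade: a 9-input AND realized entirely with fan-in-3 threshold nodes, one extra layer deep (the construction and names are ours):

    import numpy as np

    def node3(x3, w3, theta):
        # A fan-in-3 linear threshold node.
        return 1.0 if np.dot(w3, x3) >= theta else 0.0

    def and9(x9):
        # 9-input AND from fan-in-3 nodes: three first-layer ANDs feed one
        # second-layer AND, so no node exceeds fan-in 3.
        firsts = [node3(x9[3 * i:3 * i + 3], np.ones(3), 3.0) for i in range(3)]
        return node3(np.array(firsts), np.ones(3), 3.0)

Trading one wide node for a deeper tree of narrow nodes in this way is the kind of functional equivalence the abstract invokes.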


Modeling Small Oscillating Biological Networks in Analog VLSI

Neural Information Processing Systems

We have used analog VLSI technology to model a class of small oscillating biological neural circuits known as central pattern generators (CPGs). These circuits generate rhythmic patterns of activity which drive locomotor behaviour in the animal. We have designed, fabricated, and tested a model neuron circuit which relies on many of the same mechanisms as a biological central pattern generator neuron, such as delays and internal feedback. We show that this neuron can be used to build several small circuits based on known biological CPG circuits, and that these circuits produce patterns of output which are very similar to the observed biological patterns. To date, researchers in applied neural networks have tended to focus on mammalian systems as the primary source of potentially useful biological information. However, invertebrate systems may offer a source of ideas that is in many ways more appropriate, given current levels of engineering sophistication in building neural-like systems and the current state of biological understanding of mammalian circuits.
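
For intuition, the textbook software analogue of a two-neuron CPG is a pair of mutually inhibitory units with slow adaptation, a half-center oscillator; the equations and constants below are a generic rate-model sketch, not the authors' silicon circuit:

    import numpy as np

    def half_center(T=2000, dt=0.05, tau_a=20.0, g_inh=3.0, g_a=2.0, drive=1.5):
        # Two mutually inhibitory units with slow adaptation: each unit
        # suppresses the other until its own adaptation releases it,
        # producing alternating rhythmic bursts.
        v = np.array([0.1, 0.0])    # activities
        a = np.zeros(2)             # slow adaptation variables
        trace = []
        for _ in range(T):
            r = np.clip(v, 0.0, None)               # firing rates
            inh = g_inh * r[::-1]                   # cross inhibition
            v += dt * (-v + drive - inh - g_a * a)
            a += dt * (r - a) / tau_a
            trace.append(r.copy())
        return np.array(trace)

Here adaptation plays the role of the delayed internal feedback the abstract attributes to the model neuron circuit.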


Analyzing the Energy Landscapes of Distributed Winner-Take-All Networks

Neural Information Processing Systems

DCPS (the Distributed Connectionist Production System) is a neural network with complex dynamical properties. Visualizing the energy landscapes of some of its component modules leads to a better intuitive understanding of the model, and suggests ways in which its dynamics can be controlled in order to improve performance on difficult cases.
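
For reference, the landscapes in question are those of the standard energy function for symmetric networks (notation ours); a winner-take-all module uses pairwise inhibitory weights, so that when the inhibition outweighs the biases, each state with exactly one active unit sits in its own basin:

    E(s) = -\frac{1}{2} \sum_{i \neq j} w_{ij}\, s_i s_j - \sum_i \theta_i s_i,
    \qquad w_{ij} = -\beta \ (i \neq j), \ \beta > 0.

The depth and width of these minima are the basin properties whose visualization the abstract describes.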