Industry
Performance of a Stochastic Learning Microchip
Alspector, Joshua, Gupta, Bhusan, Allen, Robert B.
We have fabricated a test chip in 2 micron CMOS technology that embodies these ideas and we report our evaluation of the microchip and our plans for improvements. Knowledge is encoded in the test chip by presenting digital patterns to it that are examples of a desired input-output Boolean mapping. This knowledge is learned and stored entirely on chip, in digitally controlled synapse-like elements, in the form of connection strengths between neuron-like elements. The only portion of this learning system which is off chip is the VLSI test equipment used to present the patterns. This learning system uses a modified Boltzmann machine algorithm [3] which, if simulated on a serial digital computer, takes enormous amounts of computer time. Our physical implementation is about 100,000 times faster. The test chip, if expanded to a board-level system of thousands of neurons, would be an appropriate architecture for solving artificial intelligence problems whose solutions are hard to specify using a conventional rule-based approach. Examples include speech and pattern recognition and encoding some types of expert knowledge.
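The Boltzmann machine rule referred to above adjusts each connection using only locally measured co-occurrence statistics gathered in a "clamped" phase (visible units fixed to a training pattern) and a "free" phase. A minimal software sketch of that rule follows; it is not the authors' chip circuitry, and the unit counts, learning rate, and sweep counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_step(s, W, free_units, T=1.0):
    """One sweep of stochastic (Glauber) updates over the unclamped units."""
    for i in free_units:
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ s) / T))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s

def boltzmann_update(W, pattern, visible, hidden, eta=0.05, sweeps=20):
    n = W.shape[0]
    # Clamped ("plus") phase: visible units held at the training pattern.
    s = rng.integers(0, 2, n).astype(float)
    s[visible] = pattern
    for _ in range(sweeps):
        gibbs_step(s, W, hidden)
    plus = np.outer(s, s)
    # Free ("minus") phase: every unit runs freely.
    s = rng.integers(0, 2, n).astype(float)
    for _ in range(sweeps):
        gibbs_step(s, W, list(visible) + list(hidden))
    minus = np.outer(s, s)
    # Local rule: each weight needs only the two co-occurrence estimates.
    W += eta * (plus - minus)
    np.fill_diagonal(W, 0.0)
    return W

W = np.zeros((4, 4))
W = boltzmann_update(W, np.array([1.0, 0.0]), visible=[0, 1], hidden=[2, 3])
```

In practice the co-occurrence statistics would be averaged over many samples per phase; a single sample per phase is used here only to keep the sketch short.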
ALVINN: An Autonomous Land Vehicle in a Neural Network
ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.
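As a rough illustration of the kind of architecture described (not ALVINN's actual retina sizes or trained weights: the 8x8 input, 5 hidden units, and 9 steering outputs below are stand-ins), a 3-layer feedforward pass mapping a sensor image to a discrete steering direction might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes chosen for illustration; the real network used a video
# retina plus a range-finder retina and a row of direction output units.
n_in, n_hidden, n_out = 64, 5, 9   # 9 output units code steering directions

W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def steer(image):
    """Forward pass: the most active output unit is the chosen direction."""
    h = sigmoid(W1 @ image.ravel())
    out = sigmoid(W2 @ h)
    return int(np.argmax(out))  # 0 = hard left ... 8 = hard right

direction = steer(rng.random((8, 8)))
```

Training would adjust W1 and W2 by back-propagating the error between this output vector and the desired steering direction for each (simulated) road image.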
Modeling Small Oscillating Biological Networks in Analog VLSI
Ryckebusch, Sylvie, Bower, James M., Mead, Carver
We have used analog VLSI technology to model a class of small oscillating biological neural circuits known as central pattern generators (CPG). These circuits generate rhythmic patterns of activity which drive locomotor behaviour in the animal. We have designed, fabricated, and tested a model neuron circuit which relies on many of the same mechanisms as a biological central pattern generator neuron, such as delays and internal feedback. We show that this neuron can be used to build several small circuits based on known biological CPG circuits, and that these circuits produce patterns of output which are very similar to the observed biological patterns. To date, researchers in applied neural networks have tended to focus on mammalian systems as the primary source of potentially useful biological information. However, invertebrate systems may represent a source of ideas in many ways more appropriate, given current levels of engineering sophistication in building neural-like systems, and given the state of biological understanding of mammalian circuits.
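A classic mechanism behind such rhythm generators is the "half-center" oscillator: two units coupled by mutual inhibition, each with a slow fatigue (adaptation) variable, alternate bursts without any oscillatory input. A rate-based numerical sketch of that mechanism (the parameters below are illustrative, not those of the analog circuit):

```python
import numpy as np

def half_center(steps=3000, dt=0.01):
    """Two mutually inhibiting units with slow adaptation burst in antiphase."""
    v = np.array([1.0, 0.0])   # firing-rate activities
    a = np.zeros(2)            # slow adaptation (fatigue) variables
    w_inh, tau_a, g_a = 2.0, 1.0, 1.5
    trace = []
    for _ in range(steps):
        # tonic drive, minus inhibition from the other unit, minus fatigue
        drive = 1.0 - w_inh * v[::-1] - g_a * a
        rate = np.clip(drive, 0.0, None)       # rectified firing rate
        v += dt * (-v + rate) / 0.05           # fast activity dynamics
        a += dt * (-a + v) / tau_a             # slow fatigue tracks activity
        trace.append(v.copy())
    return np.array(trace)

trace = half_center()
```

The active unit suppresses its partner until its own fatigue builds up and releases the partner, which then takes over; the two traces oscillate out of phase, mirroring the alternating motor patterns these biological circuits produce.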
Neural Networks for Model Matching and Perceptual Organization
Mjolsness, Eric, Gindi, Gene, Anandan, P.
We introduce an optimization approach for solving problems in computer vision that involve multiple levels of abstraction. Our objective functions include compositional and specialization hierarchies. We cast vision problems as inexact graph matching problems, formulate graph matching in terms of constrained optimization, and use analog neural networks to perform the optimization. The method is applicable to perceptual grouping and model matching. Preliminary experimental results are shown.
Learning the Solution to the Aperture Problem for Pattern Motion with a Hebb Rule
The primate visual system learns to recognize the true direction of pattern motion using local detectors only capable of detecting the component of motion perpendicular to the orientation of the moving edge. A multilayer feedforward network model similar to Linsker's model was presented with input patterns each consisting of randomly oriented contours moving in a particular direction. Input layer units are granted component direction and speed tuning curves similar to those recorded from neurons in primate visual area V1 that project to area MT. The network is trained on many such patterns until most weights saturate. A proportion of the units in the second layer solve the aperture problem (e.g., show the same direction-tuning curve peak to plaids as to gratings), resembling pattern-direction selective neurons, which first appear in area MT.
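The kind of experiment described can be caricatured in a few lines: input channels with rectified cosine tuning to the component direction, patterns of randomly oriented contours moving coherently, and a linear Hebb rule with saturating weights. All parameter values below are invented for illustration and are not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch = 16
prefs = np.linspace(0.0, 2 * np.pi, n_ch, endpoint=False)  # preferred directions

def pattern_input(true_dir, n_contours=6):
    """Random contours moving coherently in `true_dir`. Each contour's normal
    lies within 90 deg of the true direction; a channel fires when the
    component direction matches its preference (rectified cosine tuning)."""
    x = np.zeros(n_ch)
    for _ in range(n_contours):
        comp = true_dir + rng.uniform(-np.pi / 2, np.pi / 2)  # component direction
        x += np.clip(np.cos(prefs - comp), 0.0, None)
    return x

w = np.full(n_ch, 0.1)
true_dir = np.pi / 2          # train on upward pattern motion
for _ in range(500):
    x = pattern_input(true_dir)
    y = w @ x                                  # linear output unit
    w = np.clip(w + 1e-4 * y * x, 0.0, 1.0)    # Hebb rule, saturating weights

peak = prefs[np.argmax(w)]    # tuning should peak near the true direction
```

Because channels near the true pattern direction are active on average across contour orientations, Hebbian growth concentrates weight there, so the output unit's tuning comes to reflect pattern direction rather than any single contour's component direction.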
An Information Theoretic Approach to Rule-Based Connectionist Expert Systems
Goodman, Rodney M., Miller, John W., Smyth, Padhraic
We discuss in this paper architectures for executing probabilistic rule-bases in a parallel manner, using as a theoretical basis recently introduced information-theoretic models. We will begin by describing our (non-neural) learning algorithm and theory of quantitative rule modelling, followed by a discussion on the exact nature of two particular models. Finally we work through an example of our approach, going from database to rules to inference network, and compare the network's performance with the theoretical limits for specific problems.
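One information-theoretic score commonly used to rank candidate probabilistic rules "if X=x then Y=y" is the J-measure: the probability of the rule firing times the divergence between the posterior and prior distributions of Y. Treat the exact form below as an assumption about the quantitative model being alluded to, not a statement of it:

```python
import math

def j_measure(p_x, p_y, p_y_given_x):
    """J-measure of the rule 'if X=x then Y=y': p(x) times the KL divergence
    between the posterior Bernoulli(p(y|x)) and the prior Bernoulli(p(y))."""
    def term(post, prior):
        return post * math.log2(post / prior) if post > 0 else 0.0
    divergence = term(p_y_given_x, p_y) + term(1 - p_y_given_x, 1 - p_y)
    return p_x * divergence

# A rule that sharpens an uncertain prior scores higher than a redundant one.
strong = j_measure(p_x=0.3, p_y=0.5, p_y_given_x=0.95)
weak = j_measure(p_x=0.3, p_y=0.5, p_y_given_x=0.55)
```

A rule whose posterior equals the prior scores zero, so ranking candidate rules by this measure discards those that carry no information about the consequent.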
Models of Ocular Dominance Column Formation: Analytical and Computational Results
Miller, Kenneth D., Keller, Joseph B., Stryker, Michael P.
In the developing visual system in many mammalian species, there is initially a uniform, overlapping innervation of layer 4 of the visual cortex by inputs representing the two eyes. Subsequently, these inputs segregate into patches or stripes that are largely or exclusively innervated by inputs serving a single eye, known as ocular dominance patches. The ocular dominance patches are on a small scale compared to the map of the visual world, so that the initially continuous map becomes two interdigitated maps, one representing each eye. These patches, together with the layers of cortex above and below layer 4, whose responses are dominated by the eye innervating the corresponding layer 4 patch, are known as ocular dominance columns.
Storing Covariance by the Associative Long-Term Potentiation and Depression of Synaptic Strengths in the Hippocampus
Stanton, Patric K., Sejnowski, Terrence J.
We have tested this assumption in the hippocampus, a cortical structure of the brain that is involved in long-term memory. A brief, high-frequency activation of excitatory synapses in the hippocampus produces an increase in synaptic strength known as long-term potentiation, or LTP (Bliss and Lomo, 1973), that can last for many days. LTP is known to be Hebbian since it requires the simultaneous release of neurotransmitter from presynaptic terminals coupled with postsynaptic depolarization (Kelso et al, 1986; Malinow and Miller, 1986; Gustafsson et al, 1987). However, a mechanism for the persistent reduction of synaptic strength that could balance LTP has not yet been demonstrated. We studied the associative interactions between separate inputs onto the same dendritic trees of hippocampal pyramidal cells of field CA1, and found that a low-frequency input which, by itself, does not persistently change synaptic strength, can either increase (associative LTP) or decrease in strength (associative long-term depression or LTD) depending upon whether it is positively or negatively correlated in time with a second, high-frequency bursting input. LTP of synaptic strength is Hebbian, and LTD is anti-Hebbian since it is elicited by pairing presynaptic firing with postsynaptic hyperpolarization sufficient to block postsynaptic activity.
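The covariance interpretation suggested by these results can be written as a toy learning rule: the weight change is proportional to the covariance of pre- and postsynaptic activity, so inputs positively correlated with postsynaptic bursting strengthen (LTP-like) and negatively correlated inputs weaken (LTD-like). The binary "burst" trains and rate constant below are invented for illustration, not the experimental protocol:

```python
import numpy as np

rng = np.random.default_rng(3)

def covariance_rule(pre, post, eta=0.1):
    """Covariance learning rule: weight change proportional to the covariance
    of pre- and postsynaptic activity over a training epoch."""
    dpre = pre - pre.mean()
    dpost = post - post.mean()
    return eta * np.mean(dpre * dpost)

T = 1000
burst = rng.random(T) < 0.5               # high-frequency input drives the cell
post = burst.astype(float)
in_phase = burst.astype(float)            # weak input correlated with bursts
out_of_phase = (~burst).astype(float)     # weak input anticorrelated with bursts

dw_ltp = covariance_rule(in_phase, post)      # positive: associative LTP
dw_ltd = covariance_rule(out_of_phase, post)  # negative: associative LTD
```

Unlike a plain Hebbian product rule, the covariance form yields depression as well as potentiation, which is the balancing mechanism the abstract argues the experiments demonstrate.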