Neural Network Visualization

Neural Information Processing Systems

We have developed graphics to visualize static and dynamic information in layered neural network learning systems. Emphasis was placed on creating new visuals that make use of spatial arrangements, size information, animation, and color. We applied these tools to the study of back-propagation learning of simple Boolean predicates, and have obtained new insights into the dynamics of the learning process.
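
The abstract does not specify the graphics themselves; as one concrete possibility, here is a minimal sketch of a Hinton diagram, a classic weight visual in which each square's area encodes a weight's magnitude and its color the sign. The layout and matplotlib usage are illustrative assumptions, not the authors' actual tools.

```python
# A minimal Hinton-diagram sketch: square area ~ |weight|, color ~ sign.
import numpy as np
import matplotlib.pyplot as plt

def hinton(weights, ax=None):
    """Draw a Hinton diagram for a 2D weight matrix."""
    ax = ax or plt.gca()
    ax.set_facecolor("gray")
    max_w = np.abs(weights).max() or 1.0
    for (row, col), w in np.ndenumerate(weights):
        size = np.sqrt(abs(w) / max_w)         # area encodes magnitude
        color = "white" if w > 0 else "black"  # color encodes sign
        ax.add_patch(plt.Rectangle((col - size / 2, row - size / 2),
                                   size, size, facecolor=color))
    ax.set_xlim(-1, weights.shape[1])
    ax.set_ylim(-1, weights.shape[0])
    ax.invert_yaxis()
    ax.set_aspect("equal")

hinton(np.random.randn(6, 8))   # e.g. a 6x8 hidden-layer weight matrix
plt.show()
```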


Sequential Decision Problems and Neural Networks

Neural Information Processing Systems

Decision-making tasks that involve delayed consequences are very common yet difficult to address with supervised learning methods. If there is an accurate model of the underlying dynamical system, then these tasks can be formulated as sequential decision problems and solved by Dynamic Programming. This paper discusses reinforcement learning in terms of the sequential decision framework and shows how a learning algorithm similar to the one implemented by the Adaptive Critic Element used in the pole-balancer of Barto, Sutton, and Anderson (1983), and further developed by Sutton (1984), fits into this framework. Adaptive neural networks can play significant roles as modules for approximating the functions required for solving sequential decision problems.
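
To make the adaptive-critic idea concrete, here is a minimal sketch of the TD(0) update for a linear critic, the kind of prediction-error-driven rule this algorithm family uses. The state encoding, learning rate, and discount factor are illustrative assumptions, not the paper's settings.

```python
# A hedged sketch of a temporal-difference critic update: the prediction
# error (reward plus discounted next value minus current value) drives
# the weight change of a linear value estimator V(x) = w . x.
import numpy as np

def td0_update(w, x, x_next, reward, alpha=0.1, gamma=0.95):
    """One TD(0) step; returns updated weights and the TD error."""
    v, v_next = w @ x, w @ x_next
    delta = reward + gamma * v_next - v   # temporal-difference error
    return w + alpha * delta * x, delta

w = np.zeros(4)
x, x_next = np.array([1., 0, 0, 0]), np.array([0, 1., 0, 0])
w, delta = td0_update(w, x, x_next, reward=1.0)
print(w, delta)
```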


Bayesian Inference of Regular Grammar and Markov Source Models

Neural Information Processing Systems

In this paper we develop a Bayes criterion, which includes the Rissanen complexity, for inferring regular grammar models. We develop two methods for regular grammar Bayesian inference. The first method is based on treating the regular grammar as a 1-dimensional Markov source, and the second is based on the combinatoric characteristics of the regular grammar itself. We apply the resulting Bayes criteria to a particular example in order to show the efficiency of each method.
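
As a rough illustration of a Rissanen-style criterion, the sketch below scores a symbol sequence under a first-order Markov source with a two-part code length: parameter bits plus data bits. The paper's actual criteria differ; this only shows the model-cost/data-cost trade-off.

```python
# A hedged two-part (MDL-style) score for a 1st-order Markov source:
# bits to code the model parameters plus bits to code the data given
# the model. The 0.5*log2(n) per-parameter rate is Rissanen's
# asymptotic figure; everything else is a toy assumption.
import math
from collections import Counter

def mdl_score(seq):
    """Two-part code length (bits) for seq under a 1st-order Markov model."""
    n, k = len(seq), len(set(seq))
    # Model cost: k*(k-1) free transition probabilities.
    param_bits = 0.5 * k * (k - 1) * math.log2(max(n, 2))
    # Data cost: -log2 likelihood under maximum-likelihood transitions.
    pair = Counter(zip(seq, seq[1:]))
    ctx = Counter(seq[:-1])
    data_bits = -sum(c * math.log2(c / ctx[a]) for (a, b), c in pair.items())
    return param_bits + data_bits

print(mdl_score("abababababab"))   # regular structure -> near-zero data cost
print(mdl_score("aabbabbbaaba"))   # less structure -> longer data cost
```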


Pulse-Firing Neural Chips for Hundreds of Neurons

Neural Information Processing Systems

We announce new CMOS synapse circuits using only three and four MOSFETs/synapse. Neural states are asynchronous pulse streams, upon which arithmetic is performed directly. Chips implementing over 100 fully programmable synapses are described and projections to networks of hundreds of neurons are made.

1 OVERVIEW OF PULSE FIRING NEURAL VLSI

The inspiration for the use of pulse firing in silicon neural networks is clearly the electrical/chemical pulse mechanism in "real" biological neurons. Neurons fire voltage pulses of a frequency determined by their level of activity but of a constant magnitude (usually 5 volts) [Murray, 1989a]. As indicated in Figure 1, synapses perform arithmetic directly on these asynchronous pulses, to increment or decrement the receiving neuron's activity.
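
A software caricature of pulse-stream arithmetic may help: activity is encoded as a pulse rate, and a synaptic weight sets the fraction of incoming pulses that increment (w > 0) or decrement (w < 0) the receiving neuron's activity. The gating scheme, rates, and time base below are assumptions, not the chips' circuit behavior.

```python
# A hedged simulation of pulse-stream synapse arithmetic.
import random

def pulse_stream(rate, steps):
    """Asynchronous pulse stream: a pulse with probability `rate` per step."""
    return [random.random() < rate for _ in range(steps)]

def synapse(pulses, w, activity=0.0):
    """Gate each presynaptic pulse by |w|; the sign picks inc/dec."""
    for p in pulses:
        if p and random.random() < abs(w):
            activity += 1 if w > 0 else -1
    return activity

random.seed(0)
pre = pulse_stream(rate=0.6, steps=1000)
print(synapse(pre, w=+0.5))   # ~ +0.6*0.5*1000 = +300 on average
print(synapse(pre, w=-0.25))  # ~ -0.6*0.25*1000 = -150 on average
```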


The CHIR Algorithm for Feed Forward Networks with Binary Weights

Neural Information Processing Systems

A new learning algorithm, Learning by Choice of Internal Representations (CHIR), was recently introduced. Whereas many algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. The algorithm applies a search procedure in the space of internal representations, and a cooperative adaptation of the weights (e.g., by using the perceptron learning rule). Since the introduction of its basic, single-output version, the CHIR algorithm has been generalized to train any feed-forward network of binary neurons. Here we present the generalized version of the CHIR algorithm, and further demonstrate its versatility by describing how it can be modified in order to train networks with binary (±1) weights. Preliminary tests of this binary version on the random teacher problem are also reported.
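
The weight-adaptation half of such a scheme can be illustrated with the classical perceptron rule the abstract cites: the sketch below fits a single ±1 unit to fixed targets, which stand in for a chosen internal representation. The inputs and targets are made up, and the search over representations itself is not shown.

```python
# A hedged sketch of the perceptron learning rule for one ±1 unit.
import numpy as np

def perceptron_fit(X, t, epochs=50):
    """Fit w so that sign(X @ w) == t for ±1 inputs X and targets t."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, target in zip(X, t):
            if np.sign(w @ x) != target:   # misclassified ->
                w += target * x            # move w toward target * x
                errors += 1
        if errors == 0:                    # converged on this epoch
            break
    return w

X = np.array([[1, 1, 1], [1, -1, 1], [-1, 1, 1], [-1, -1, 1]])  # bias column
t = np.array([1, 1, 1, -1])                                     # OR-like task
w = perceptron_fit(X, t)
print(np.sign(X @ w), w)
```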


Can Simple Cells Learn Curves? A Hebbian Model in a Structured Environment

Neural Information Processing Systems

In the mammalian visual cortex, orientation-selective 'simple cells' which detect straight lines may be adapted to detect curved lines instead. We test a biologically plausible, Hebbian, single-neuron model, which learns oriented receptive fields upon exposure to unstructured (noise) input and maintains orientation selectivity upon exposure to edges or bars of all orientations and positions. This model can also learn arc-shaped receptive fields upon exposure to an environment of only circular rings. Thus, new experiments which try to induce an abnormal (curved) receptive field may provide insight into the plasticity of simple cells. The model suggests that exposing cells to only a single spatial frequency may induce more striking spatial-frequency- and orientation-dependent effects than heretofore observed.
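
One widely used single-neuron Hebbian learner of this general kind is Oja's rule, sketched below on toy "structured environment" inputs (vertical bars at random positions). Oja's rule is a stand-in assumption here; the paper's model and input ensemble differ.

```python
# A hedged sketch of Hebbian receptive-field formation with Oja's rule,
# which adds a decay term to stabilize plain Hebbian weight growth.
import numpy as np

rng = np.random.default_rng(0)

def oja_step(w, x, eta=0.01):
    y = w @ x                          # linear response
    return w + eta * y * (x - y * w)   # Hebb term with normalizing decay

def bar(size=8):
    """A vertical bar at a random column, flattened into a patch vector."""
    img = np.zeros((size, size))
    img[:, rng.integers(size)] = 1.0
    return img.ravel()

w = rng.normal(scale=0.1, size=64)
for _ in range(5000):
    w = oja_step(w, bar())
# Weights organize into vertically uniform columns, mirroring the bars.
print(np.round(w.reshape(8, 8), 2))
```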


Generalized Hopfield Networks and Nonlinear Optimization

Neural Information Processing Systems

A nonlinear neural framework, called the Generalized Hopfield Network (GHN), is proposed, which is able to solve in a parallel distributed manner systems of nonlinear equations. The method is applied to the general nonlinear optimization problem. We demonstrate GHNs implementing the three most important optimization algorithms, namely the Augmented Lagrangian, Generalized Reduced Gradient, and Successive Quadratic Programming methods. The study results in a dynamic view of the optimization problem and offers a straightforward model for the parallelization of optimization computations, thus significantly extending the practical limits of problems that can be formulated as optimization problems and that can gain from the introduction of nonlinearities in their structure. The ability of networks of highly interconnected simple nonlinear analog processors (neurons) to solve complicated optimization problems was demonstrated in a series of papers by Hopfield and Tank (Hopfield, 1984; Tank, 1986).
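
The spirit of the approach, optimization as a dynamical system, can be sketched with an Euler-discretized primal-dual gradient flow on an augmented Lagrangian, one of the three methods named above. The toy problem, step size, and penalty weight are assumptions; the GHN dynamics are more general.

```python
# A hedged sketch of augmented-Lagrangian gradient flow for
# min f(x) s.t. g(x) = 0, integrated with forward-Euler steps.
import numpy as np

# Toy problem: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0.
f_grad = lambda x: 2 * x
g = lambda x: x[0] + x[1] - 1.0
g_grad = lambda x: np.array([1.0, 1.0])

x, lam, c, dt = np.zeros(2), 0.0, 10.0, 0.01
for _ in range(2000):
    # Descend in x on L_c = f + lam*g + (c/2)*g^2; ascend in lam.
    grad_x = f_grad(x) + (lam + c * g(x)) * g_grad(x)
    x -= dt * grad_x
    lam += dt * c * g(x)
print(x)   # -> approximately [0.5, 0.5], the constrained optimum
```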


An Efficient Implementation of the Back-propagation Algorithm on the Connection Machine CM-2

Neural Information Processing Systems

In this paper, we present a novel implementation of the widely used Back-propagation neural net learning algorithm on the Connection Machine CM-2, a general-purpose, massively parallel computer with a hypercube topology. This implementation runs at about 180 million interconnections per second (IPS) on a 64K-processor CM-2. The main interprocessor communication operation used is 2D nearest-neighbor communication. The techniques developed here can be easily extended to implement other algorithms for layered neural nets on the CM-2, or on other massively parallel computers which have 2D or higher-degree connections among their processors.

1 Introduction

High-speed simulation of large artificial neural nets has become an important tool for solving real-world problems and for studying the dynamic behavior of large populations of interconnected processing elements [3, 2]. This work is intended to provide such a simulation tool for a widely used neural net learning algorithm, the Back-propagation (BP) algorithm [7]. The hardware we have used is the Connection Machine CM-2.
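
For reference, the computation being parallelized is sketched below as a serial, vectorized batch BP pass; on a CM-2-class machine each of these matrix products would be spread across the processor grid. The layer sizes, learning rate, and data are arbitrary assumptions.

```python
# A hedged, serial sketch of one-hidden-layer batch back-propagation
# expressed as dense matrix products (the units of parallel work).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))            # batch of inputs
T = rng.normal(size=(32, 4))            # targets
W1 = rng.normal(size=(8, 16)) * 0.1
W2 = rng.normal(size=(16, 4)) * 0.1

for _ in range(100):
    H = np.tanh(X @ W1)                 # forward pass
    Y = H @ W2
    dY = (Y - T) / len(X)               # squared-error gradient
    dW2 = H.T @ dY                      # backward pass
    dH = dY @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH
    W1 -= 0.1 * dW1                     # weight updates
    W2 -= 0.1 * dW2
print(float(((Y - T) ** 2).mean()))     # loss shrinks over the iterations
```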


Neurally Inspired Plasticity in Oculomotor Processes

Neural Information Processing Systems

We have constructed a two-axis camera positioning system which is roughly analogous to a single human eye. This Artificial Eye (A-eye) combines the signals generated by two rate gyroscopes with motion information extracted from visual analysis to stabilize its camera. This stabilization process is similar to the vestibulo-ocular response (VOR); like the VOR, A-eye learns a system model that can be incrementally modified to adapt to changes in its structure, performance, and environment. A-eye is an example of a robust sensory system that performs computations that can be of significant use to the designers of mobile robots.

1 Introduction

We have constructed an "artificial eye" (A-eye), an autonomous robot that incorporates a two-axis camera positioning system (Figure 1). Like the human oculomotor system, A-eye can estimate the rotation rate of its body with a gyroscope and estimate the rotation rate of its "eye" by measuring image slip.
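
A toy version of the adaptive stabilization loop: the eye counter-rotates at a learned gain times the gyro signal, and residual image slip corrects that gain. The plant, adaptation rule, and constants are assumptions, not A-eye's actual learned system model.

```python
# A hedged sketch of VOR-like gain adaptation driven by image slip.
import random

random.seed(0)
G, eta = 0.5, 0.05                         # initial (wrong) gain; adaptation rate
for step in range(500):
    head_rate = random.uniform(-1, 1)      # gyroscope measurement
    eye_rate = -G * head_rate              # stabilizing counter-rotation
    slip = head_rate + eye_rate            # visual slip = uncompensated motion
    G += eta * slip * head_rate            # cancel slip correlated with head motion
print(round(G, 3))   # -> approximately 1.0: full compensation, zero mean slip
```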


Training Stochastic Model Recognition Algorithms as Networks can Lead to Maximum Mutual Information Estimation of Parameters

Neural Information Processing Systems

One of the attractions of neural network approaches to pattern recognition is the use of a discrimination-based training method. We show that once we have modified the output layer of a multilayer perceptron to provide mathematically correct probability distributions, and replaced the usual squared error criterion with a probability-based score, the result is equivalent to Maximum Mutual Information training, which has been used successfully to improve the performance of hidden Markov models for speech recognition. If the network is specially constructed to perform the recognition computations of a given kind of stochastic model-based classifier, then we obtain a method for discrimination-based training of the parameters of the models. Examples include an HMM-based word discriminator, which we call an 'Alphanet'.
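
The two modifications the abstract describes can be shown in a few lines: a softmax output layer yields proper class probabilities, and the probability-based score is the negative log-probability of the correct class, whose gradient with respect to the logits takes a notably simple form. The toy logits below are made up for illustration.

```python
# A hedged sketch of softmax outputs scored by negative log-probability.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())     # shift for numerical stability
    return e / e.sum()

def nll_and_grad(z, target):
    """Score -log p(target) and its gradient w.r.t. the logits z."""
    p = softmax(z)
    grad = p.copy()
    grad[target] -= 1.0         # gradient is p - onehot(target)
    return -np.log(p[target]), grad

z = np.array([2.0, -1.0, 0.5])  # logits from some network
loss, grad = nll_and_grad(z, target=0)
print(loss, grad)
```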