

A Programmable Analog Neural Computer and Simulator

Neural Information Processing Systems

This report describes the design of a programmable general-purpose analog neural computer and simulator. It is intended primarily for real-world, real-time computations such as the analysis of visual or acoustical patterns, robotics, and the development of special-purpose neural nets. The machine is scalable and composed of interconnected modules containing arrays of neurons, modifiable synapses, and switches. It runs entirely in analog mode, but the connection architecture, synaptic gains, time constants, and neuron parameters are set digitally. Each neuron has a limited number of inputs and can be connected to any, but not all, other neurons.
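
As a rough illustration of the operating principle, the sketch below simulates a single leaky-integrator neuron whose synaptic gains and time constant are programmed digitally while the state evolves in (finely discretized) continuous time. The dynamics, parameter values, and class name are illustrative assumptions, not the machine's actual circuit design.

```python
import numpy as np

# Hypothetical sketch (not the authors' circuit): a leaky-integrator neuron
# whose synaptic gains and time constant are "set digitally" but whose state
# runs in analog (here, finely discretized) time.
class AnalogNeuron:
    def __init__(self, gains, tau=10e-3):
        self.gains = np.asarray(gains, dtype=float)  # digitally programmed synaptic gains
        self.tau = tau                               # digitally programmed time constant (s)
        self.v = 0.0                                 # analog internal state

    def step(self, inputs, dt=1e-4):
        # dv/dt = (-v + sum_i g_i * x_i) / tau, followed by a saturating output
        drive = float(self.gains @ np.asarray(inputs, dtype=float))
        self.v += dt * (-self.v + drive) / self.tau
        return np.tanh(self.v)

neuron = AnalogNeuron(gains=[0.5, -0.3, 1.2])
for _ in range(100):
    out = neuron.step([1.0, 0.2, 0.1])
print(out)
```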


Speech Production Using A Neural Network with a Cooperative Learning Mechanism

Neural Information Processing Systems

We propose a new neural network model and its learning algorithm. The proposed network consists of four kinds of layers - input, hidden, output, and final output layers - and may contain multiple hidden and output layers. Using the proposed SICL (Spread Pattern Information and Cooperative Learning) algorithm, it is possible to learn analog data accurately and to obtain smooth outputs. Using this neural network, we have developed a speech production system consisting of a phonemic symbol production subsystem and a speech parameter production subsystem. We have succeeded in producing natural speech waves with high accuracy.
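
The abstract specifies only the layer structure, not the SICL update rule, so the sketch below shows just a forward pass through the four kinds of layers (input, multiple hidden, multiple output, final output). The layer sizes, sigmoid activation, and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    """One fully connected sigmoid layer."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# Hypothetical layer sizes: input -> 2 hidden -> 2 output -> final output.
sizes = [8, 16, 16, 12, 12, 4]
weights = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=sizes[0])
for w in weights:
    x = layer(x, w)
print(x)  # smooth analog outputs in (0, 1)
```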


Computer Modeling of Associative Learning

Neural Information Processing Systems

This paper describes an ongoing effort that approaches neural net research through close collaboration between neuroscientists and engineers. The effort is designed to elucidate associative learning in the marine snail Hermissenda crassicornis, in which Pavlovian conditioning has been observed. Learning has been isolated in the four-neuron network at the convergence of the visual and vestibular pathways in this animal, and biophysical changes, specific to learning, have been observed in the membrane of the photoreceptor B cell. A basic charging capacitance model of a neuron is used and enhanced with biologically plausible mechanisms that are necessary to replicate the effect of learning at the cellular level. These mechanisms are nonlinear and are, primarily, instances of second-order control systems (e.g.
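
For concreteness, here is a minimal sketch of the baseline "charging capacitance" (RC) membrane model that the biologically plausible mechanisms are layered onto; the parameter values and input current are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal RC ("charging capacitance") membrane model; values are illustrative.
C, R = 1e-9, 1e8          # membrane capacitance (F) and resistance (ohm)
dt, T = 1e-4, 0.1         # time step and duration (s)
v = 0.0                   # membrane potential (V)
trace = []
for k in range(int(T / dt)):
    i_syn = 1e-10 if 0.02 < k * dt < 0.06 else 0.0   # square current pulse (A)
    # C dv/dt = -v/R + i_syn  =>  forward-Euler update
    v += dt * (-v / R + i_syn) / C
    trace.append(v)
print(max(trace))  # peak depolarization approaches i_syn * R for t >> RC
```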


A Self-Learning Neural Network

Neural Information Processing Systems

We propose a new neural network structure that is compatible with silicon technology and has built-in learning capability. The thrust of this work is a new synapse function for an artificial neuron to be used in a neural network. The synapses have the feature that the learning parameter is embodied in the thresholds of MOSFET devices and is local in character. The network is shown to be capable of learning by example as well as exhibiting the desirable features of Hopfield-type networks.
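
The circuit details are not given in the abstract, so the following is a purely behavioral sketch, under the assumption that the synapse conducts only when its input exceeds a locally stored MOSFET threshold voltage and that learning shifts that threshold. The transfer function, learning rule, and constants are all hypothetical.

```python
# Hypothetical behavioral sketch (not the authors' circuit): a synapse whose
# effective "weight" is a locally stored MOSFET threshold voltage v_t.

def synapse_current(v_in, v_t, k=1.0):
    # Crude square-law MOSFET-like transfer: off below threshold.
    return k * max(v_in - v_t, 0.0) ** 2

def learn_threshold(v_t, pre, post, eta=0.05):
    # Local rule: the stored threshold moves only on pre/post coincidence.
    return v_t - eta * pre * post

v_t = 1.0
for _ in range(20):
    pre, post = 1.0, 1.0            # coincident activity strengthens the synapse
    v_t = learn_threshold(v_t, pre, post)
print(v_t, synapse_current(1.5, v_t))
```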



GEMINI: Gradient Estimation Through Matrix Inversion After Noise Injection

Neural Information Processing Systems

Learning procedures that measure how random perturbations of unit activities correlate with changes in reinforcement are inefficient but simple to implement in hardware. Procedures like back-propagation (Rumelhart, Hinton and Williams, 1986), which compute how changes in activities affect the output error, are much more efficient but require more complex hardware. GEMINI is a hybrid procedure for multilayer networks that shares many of the implementation advantages of correlational reinforcement procedures but is more efficient. GEMINI injects noise only at the first hidden layer and measures the resultant effect on the output error. A linear network associated with each hidden layer iteratively inverts the matrix that relates the noise to the error change, thereby obtaining the error derivatives. No back-propagation is involved, so the system may contain unknown non-linearities. Two simulations demonstrate the effectiveness of GEMINI.
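
A small sketch of the idea, with the rest of the network treated as a black-box error function: noise is injected at the hidden layer, the resulting error changes are recorded, and the linear relation between noise and error change is inverted to recover the error derivatives. For brevity, a batch least-squares solve stands in for the paper's iterative inversion by a linear network; the toy error function and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the rest of the network: a scalar error E(h) of the
# first-hidden-layer activities h, treated as a black box (it may contain
# unknown non-linearities, so we never differentiate through it).
a = rng.normal(size=5)
def E(h):
    return float(np.sum(np.tanh(a * h)) ** 2)

h0 = rng.normal(size=5)
eps = 1e-3

# Inject small noise at the hidden layer and record the error changes.
K = 50
N = eps * rng.normal(size=(K, 5))              # noise injections
dE = np.array([E(h0 + n) - E(h0) for n in N])  # measured error changes

# Invert the linear relation dE ~= N @ g to estimate g = dE/dh.
# (The paper does this iteratively with a linear network; lstsq is a
# batch stand-in used here for brevity.)
g_est, *_ = np.linalg.lstsq(N, dE, rcond=None)

# Sanity check against a central-difference gradient.
g_fd = np.array([(E(h0 + eps * e) - E(h0 - eps * e)) / (2 * eps)
                 for e in np.eye(5)])
print(np.allclose(g_est, g_fd, atol=1e-2))
```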


Does the Neuron "Learn" like the Synapse?

Neural Information Processing Systems

An improved learning paradigm that offers a significant reduction in computation time during the supervised learning phase is described. It is based on extending the role that the neuron plays in artificial neural systems. Prior work has regarded the neuron as a strictly passive, nonlinear processing element and the synapse as the primary site of information processing and knowledge retention. In this work, the neuron's role is extended by allowing its parameters to participate adaptively in the learning phase. The temperature of the sigmoid function is an example of such a parameter.
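
A minimal sketch of the idea for a single neuron, assuming a squared-error criterion and plain gradient descent: the sigmoid temperature T receives its own gradient update alongside the synaptic weights. The network shape, data, and learning rate are illustrative, not the paper's.

```python
import numpy as np

def sigmoid(x, T):
    return 1.0 / (1.0 + np.exp(-x / T))

rng = np.random.default_rng(0)
w = rng.normal(size=3)           # synaptic weights
T = 1.0                          # sigmoid temperature, also learned
x = np.array([0.5, -1.0, 0.8])
target = 1.0
eta = 0.1

for _ in range(500):
    net = w @ x
    y = sigmoid(net, T)
    err = y - target                     # squared-error gradient dE/dy
    dy_dnet = y * (1.0 - y) / T          # d sigmoid / d net
    dy_dT = -net / T**2 * y * (1.0 - y)  # d sigmoid / d T
    w -= eta * err * dy_dnet * x         # usual synaptic update
    T -= eta * err * dy_dT               # the neuron adapts its temperature too
    T = max(T, 0.05)                     # keep the temperature positive
print(y, T)
```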


Applications of Error Back-Propagation to Phonetic Classification

Neural Information Processing Systems

This paper is concerned with the use of error back-propagation in phonetic classification. Our objective is to investigate the basic characteristics of back-propagation and to study how the framework of multi-layer perceptrons can be exploited in phonetic recognition.


Performance of Synthetic Neural Network Classification of Noisy Radar Signals

Neural Information Processing Systems

This study evaluates the performance of the multilayer perceptron and the frequency-sensitive competitive learning network in identifying five commercial aircraft from radar backscatter measurements. The performance of the neural network classifiers is compared with that of the nearest-neighbor and maximum-likelihood classifiers. Our results indicate that, for this problem, the neural network classifiers are relatively insensitive to changes in the network topology and to the noise level in the training data. Although the traditional algorithms outperform these simple neural classifiers on this problem, we feel that neural networks show the potential for improved performance.


Performance of a Stochastic Learning Microchip

Neural Information Processing Systems

We have fabricated a test chip in 2-micron CMOS technology that embodies these ideas, and we report our evaluation of the microchip and our plans for improvements. Knowledge is encoded in the test chip by presenting digital patterns to it that are examples of a desired input-output Boolean mapping. This knowledge is learned and stored entirely on chip in digitally controlled synapse-like elements, in the form of connection strengths between neuron-like elements. The only portion of this learning system that is off chip is the VLSI test equipment used to present the patterns. This learning system uses a modified Boltzmann machine algorithm [3] which, if simulated on a serial digital computer, takes enormous amounts of computer time. Our physical implementation is about 100,000 times faster. The test chip, if expanded to a board-level system of thousands of neurons, would be an appropriate architecture for solving artificial intelligence problems whose solutions are hard to specify using a conventional rule-based approach. Examples include speech and pattern recognition and the encoding of some types of expert knowledge.
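
For reference, the sketch below implements a plain (unmodified) Boltzmann machine learning step of the kind the chip's rule is based on: Gibbs sampling in a clamped and a free-running phase, followed by a Hebbian-minus-anti-Hebbian weight update. The chip's specific modifications [3] are not described in the abstract, and the network size, clamping pattern, and schedule here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = np.zeros((n, n))  # symmetric connection strengths, zero diagonal

def sample(W, clamp=None, steps=200, T=1.0):
    """Gibbs-sample binary (+/-1) states; optionally clamp some units."""
    s = rng.choice([-1.0, 1.0], size=n)
    if clamp is not None:
        for i, v in clamp.items():
            s[i] = v
    for _ in range(steps):
        i = rng.integers(n)
        if clamp is not None and i in clamp:
            continue
        p_on = 1.0 / (1.0 + np.exp(-2.0 * (W[i] @ s) / T))
        s[i] = 1.0 if rng.random() < p_on else -1.0
    return s

# One learning step: Hebbian in the clamped ("plus") phase,
# anti-Hebbian in the free-running ("minus") phase.
pattern = {0: 1.0, 1: -1.0, 2: 1.0}   # example input-output clamping
eta = 0.05
s_plus = sample(W, clamp=pattern)
s_minus = sample(W)
dW = eta * (np.outer(s_plus, s_plus) - np.outer(s_minus, s_minus))
np.fill_diagonal(dW, 0.0)             # no self-connections
W += dW
```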