Recognition of Manipulated Objects by Motor Learning

Neural Information Processing Systems

We present two neural network controller learning schemes, based on feedback-error-learning and a modular architecture, for recognition and control of multiple manipulated objects. In the first scheme, a Gating Network is trained to acquire object-specific representations for recognition of a number of objects (or sets of objects). In the second scheme, an Estimation Network is trained to acquire function-specific, rather than object-specific, representations which directly estimate physical parameters. Both recognition networks are trained to identify manipulated objects using somatic and/or visual information. After learning, appropriate motor commands for manipulation of each object are issued by the control networks.
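
To make the modular idea concrete, the sketch below blends per-object expert controllers with a gating network that recognizes the object from sensory features. All names, sizes, and the linear experts are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

# Gating over expert controllers (mixture-of-experts style sketch).
# The gating network recognizes the manipulated object from somatic/
# visual features and weights each expert's motor command accordingly.
rng = np.random.default_rng(0)
n_objects, n_features, n_motor = 3, 8, 4

# One expert controller per object (here: an untrained linear map).
experts = [rng.normal(size=(n_motor, n_features)) for _ in range(n_objects)]
W_gate = rng.normal(size=(n_objects, n_features))  # gating network weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def motor_command(x):
    """Blend expert outputs by the gating network's object posteriors."""
    g = softmax(W_gate @ x)            # soft object-recognition weights
    u = sum(g[i] * (experts[i] @ x) for i in range(n_objects))
    return u, g

x = rng.normal(size=n_features)        # combined somatic + visual features
u, g = motor_command(x)
print("gating weights:", np.round(g, 3))
```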


Experimental Evaluation of Learning in a Neural Microsystem

Neural Information Processing Systems

Joshua Alspector, Anthony Jayakumar, Stephan Lunat (Bellcore, Morristown, NJ 07962-1910)

We report learning measurements from a system composed of a cascadable learning chip, data generators and analyzers for training-pattern presentation, and an X-windows-based software interface. The 32-neuron learning chip has 496 adaptive synapses and can perform Boltzmann and mean-field learning using separate noise and gain controls.
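
A minimal software analogue of the learning rule the chip implements: Boltzmann learning moves each weight by the difference between unit-state correlations measured in a clamped phase and a free-running phase. The sampling, annealing, and gain-control details of the hardware are omitted here, and the function name is our own.

```python
import numpy as np

def boltzmann_update(W, s_clamped, s_free, lr=0.05):
    """One Boltzmann learning step.

    W          -- symmetric weight matrix, shape (n, n)
    s_clamped  -- sampled +/-1 unit states with inputs and targets fixed,
                  shape (n_samples, n)
    s_free     -- sampled +/-1 unit states with the network free-running
    """
    corr_clamped = s_clamped.T @ s_clamped / len(s_clamped)
    corr_free = s_free.T @ s_free / len(s_free)
    W += lr * (corr_clamped - corr_free)   # clamped-phase minus free-phase correlations
    np.fill_diagonal(W, 0.0)               # no self-connections
    return W
```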


Green's Function Method for Fast On-Line Learning Algorithm of Recurrent Neural Networks

Neural Information Processing Systems

The two well-known learning algorithms for recurrent neural networks are back-propagation (Rumelhart et al.; Werbos) and forward propagation (Williams and Zipser). The main drawback of back-propagation is its off-line backward pass in time for error accumulation, which violates the on-line requirement of many practical applications. Although the forward-propagation algorithm can be used on-line, its drawback is the heavy computational load required to update the high-dimensional sensitivity matrix (O(N^4) operations per time step). Developing a fast forward algorithm is therefore a challenging task.
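
The cost that motivates the paper comes from the forward (real-time recurrent learning) sensitivity update. A direct implementation for N fully recurrent units, sketched below, maintains the N^3-entry sensitivity tensor p[k, i, j] = dy_k/dW_ij and spends an O(N) sum on each entry, giving O(N^4) operations per time step. This is a textbook RTRL step, not the paper's Green's function method.

```python
import numpy as np

def rtrl_step(W, y, p, lr, target=None):
    """One forward-propagation (RTRL) step for y(t+1) = tanh(W y(t))."""
    y_new = np.tanh(W @ y)
    d = 1.0 - y_new**2                     # tanh'(net)
    N = len(y)
    p_new = np.zeros_like(p)               # shape (N, N, N): dy_k / dW_ij
    for k in range(N):
        for i in range(N):
            for j in range(N):
                # recurrent term (an O(N) sum) plus the direct term
                p_new[k, i, j] = d[k] * (W[k] @ p[:, i, j]
                                         + (y[j] if i == k else 0.0))
    if target is not None:                 # on-line gradient step
        e = target - y_new
        W = W + lr * np.einsum('k,kij->ij', e, p_new)
    return y_new, p_new, W

# N^3 sensitivities, each updated with an O(N) sum => O(N^4) per step.
rng = np.random.default_rng(0)
N = 4
W = rng.normal(scale=0.5, size=(N, N))
y, p = np.zeros(N), np.zeros((N, N, N))
y, p, W = rtrl_step(W, y, p, lr=0.1, target=np.ones(N))
```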


Gradient Descent: Second Order Momentum and Saturating Error

Neural Information Processing Systems

We regard gradient descent with momentum as a dynamic system and explore a non-quadratic error surface, showing that saturation of the error accounts for a variety of effects observed in simulations and justifies some popular heuristics. 1 INTRODUCTION Gradient descent is the bread-and-butter optimization technique in neural networks. Some researchers build special-purpose hardware to accelerate gradient descent optimization of backpropagation networks. Understanding the dynamics of gradient descent on realistic, non-quadratic error surfaces is therefore of great practical value. Here we briefly review the known results on the convergence of batch gradient descent; show that second-order momentum gives no speedup; simulate a real network and observe some effects not predicted by theory; and account for these effects by analyzing gradient descent with momentum on a saturating error surface.
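
The dynamic system in question is easy to state in code. The sketch below runs gradient descent with (first-order) momentum on a one-dimensional saturating error surface, E(w) = (tanh(w) - t)^2, which is an illustrative stand-in for the paper's setting rather than its exact surface.

```python
import numpy as np

def descend(w, t=1.0, lr=0.5, momentum=0.9, steps=200):
    """Gradient descent with momentum on E(w) = (tanh(w) - t)**2."""
    v = 0.0
    for _ in range(steps):
        y = np.tanh(w)
        grad = 2.0 * (y - t) * (1.0 - y**2)  # vanishes as |w| grows: saturation
        v = momentum * v - lr * grad          # momentum accumulates past steps
        w += v
    return w

# Started far out on the plateau, the gradient is exponentially small and
# progress stalls -- one of the saturation effects analyzed in the paper.
print(descend(w=5.0))
```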


A Neurocomputer Board Based on the ANNA Neural Network Chip

Neural Information Processing Systems

Many researchers have built neural-network chips, but few chips have been installed in board-level systems, even though this next level of integration provides insights and advantages that can't be attained on a chip testing station. Building a board demonstrates whether or not the chip can be effectively integrated into the larger systems required for real applications.


Induction of Finite-State Automata Using Second-Order Recurrent Networks

Neural Information Processing Systems

By heuristic search over the space of finite-state automata with up to eight states, Tomita (1982) was able to induce a recognizer for each of his example languages. Recognizers of finite-state languages have also been induced using first-order recurrent connectionist networks (Elman, 1990; Williams and Zipser, 1988; Cleeremans, Servan-Schreiber and McClelland, 1989). Generally speaking, these results were obtained by training the network to predict the next symbol (Cleeremans, Servan-Schreiber and McClelland, 1989; Williams and Zipser, 1988), rather than by training the network to accept or reject strings of different lengths. Several training algorithms used an approximation to the gradient (Elman, 1990; Cleeremans, Servan-Schreiber and McClelland, 1989), obtained by truncating the computation of the backward recurrence. The problem of inducing languages from examples has also been approached using second-order recurrent networks (Pollack, 1990; Giles et al., 1990). Using a truncated approximation to the gradient and Tomita's training sets, Pollack reported that "none of the ideal languages were induced" (Pollack, 1990). On the other hand, a Tomita language has been induced using the complete gradient (Giles et al., 1991). This paper reports the induction of several Tomita languages and the extraction of the corresponding automata, with certain differences in method from Giles et al. (1991).
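
For reference, a second-order recurrent network computes its next state from products of state and input units, S_i(t+1) = g(sum_jk W_ijk S_j(t) I_k(t)), which is what lets a trained network mimic a finite-state automaton. The sketch below shows this forward pass and a simple acceptance rule; the sizes and the choice of accepting unit are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_symbols = 4, 2
W = rng.normal(scale=0.5, size=(n_states, n_states, n_symbols))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def accepts(string):
    """Run a binary string through the network; threshold one state unit."""
    s = np.zeros(n_states)
    s[0] = 1.0                                 # designated start state
    for ch in string:
        x = np.eye(n_symbols)[int(ch)]         # one-hot input symbol
        # S_i(t+1) = g(sum_jk W_ijk * S_j(t) * I_k(t))
        s = sigmoid(np.einsum('ijk,j,k->i', W, s, x))
    return s[0] > 0.5                          # accept iff unit 0 is "on"

print(accepts("0110"))
```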


Neural Network - Gaussian Mixture Hybrid for Speech Recognition or Density Estimation

Neural Information Processing Systems

The subject of this paper is the integration of multi-layered Artificial Neural Networks (ANN) with probability density functions such as the Gaussian mixtures found in continuous density Hidden Markov Models (HMM). In the first part of this paper we present an ANN/HMM hybrid in which all the parameters of the system are simultaneously optimized with respect to a single criterion. In the second part of this paper, we study the relationship between the density of the inputs of the network and the density of the outputs of the network. A few experiments are presented to explore how to perform density estimation with ANNs. 1 INTRODUCTION This paper studies the integration of Artificial Neural Networks (ANN) with probability density functions (pdf) such as the Gaussian mixtures often used in continuous density Hidden Markov Models. The ANNs considered here are multi-layered or recurrent networks with hyperbolic tangent hidden units.
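
The density side of the hybrid is straightforward to write down: the network maps an input frame to y = f(x), and a Gaussian mixture scores y. The sketch below evaluates a diagonal-covariance mixture likelihood; in the paper the network and mixture parameters are optimized jointly, which is omitted here, and all values shown are made up.

```python
import numpy as np

def mixture_density(y, weights, means, variances):
    """Diagonal-covariance Gaussian mixture likelihood p(y)."""
    p = 0.0
    d = len(y)
    for w, mu, var in zip(weights, means, variances):
        norm = (2 * np.pi) ** (-d / 2) / np.sqrt(np.prod(var))
        p += w * norm * np.exp(-0.5 * np.sum((y - mu) ** 2 / var))
    return p

y = np.tanh(np.array([0.3, -1.2]))         # e.g. tanh network outputs
print(mixture_density(y,
                      weights=[0.5, 0.5],
                      means=[np.zeros(2), np.ones(2)],
                      variances=[np.ones(2), 0.5 * np.ones(2)]))
```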


Software for ANN training on a Ring Array Processor

Neural Information Processing Systems

Experimental research on Artificial Neural Network (ANN) algorithms requires either writing variations on the same program or making one monolithic program with many parameters and options. Using an object-oriented library reduces the size of these experimental programs and makes them easier to read, write, and modify. An efficient and flexible realization of this idea is the Connectionist Layered Object-oriented Network Simulator (CLONES).
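
The gain from the object-oriented approach is that an experiment becomes a short composition of reusable objects rather than a fresh monolithic program. The toy classes below illustrate the idea only; they are not the CLONES API.

```python
import numpy as np

class Layer:
    """A reusable building block: one tanh layer."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))
    def forward(self, x):
        return np.tanh(self.W @ x)

class Network:
    """Compose layers; a new experiment is a new composition."""
    def __init__(self, sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.layers = [Layer(a, b, rng) for a, b in zip(sizes, sizes[1:])]
    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

net = Network([16, 8, 4])                  # the whole "program"
print(net.forward(np.ones(16)))
```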


Fault Diagnosis of Antenna Pointing Systems using Hybrid Neural Network and Signal Processing Models

Neural Information Processing Systems

We describe in this paper a novel application of neural networks to system health monitoring of a large antenna for deep space communications. The paper outlines our approach to building a monitoring system using hybrid signal processing and neural network techniques, including autoregressive modelling, pattern recognition, and Hidden Markov models. We discuss several problems which are somewhat generic in applications of this kind; in particular, we address the problem of detecting classes which were not present in the training data. Experimental results indicate that the proposed system is sufficiently reliable for practical implementation. 1 Background: The Deep Space Network The Deep Space Network (DSN), designed and operated by the Jet Propulsion Laboratory (JPL) for the National Aeronautics and Space Administration (NASA), is unique in providing end-to-end telecommunication capabilities between Earth and various interplanetary spacecraft throughout the solar system. The ground component of the DSN consists of three ground-station complexes located in California, Spain, and Australia, giving full 24-hour coverage for deep-space communications.
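
One simple way to handle classes absent from the training data is to wrap the classifier with a rejection threshold: if no trained class wins with sufficient confidence, the input is flagged as novel. The sketch below is a generic illustration of that idea; the threshold, labels, and posterior values are invented, not the paper's.

```python
import numpy as np

def classify_with_rejection(posteriors, labels, threshold=0.9):
    """Return the winning label, or 'unknown' if confidence is too low."""
    k = int(np.argmax(posteriors))
    if posteriors[k] < threshold:
        return "unknown"                  # likely a class unseen in training
    return labels[k]

labels = ["normal", "fault A", "fault B"]
print(classify_with_rejection(np.array([0.40, 0.35, 0.25]), labels))  # unknown
print(classify_with_rejection(np.array([0.95, 0.03, 0.02]), labels))  # normal
```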