Adjoint Operator Algorithms for Faster Learning in Dynamical Neural Networks
Barhen, Jacob, Toomarian, Nikzad Benny, Gulati, Sandeep
A methodology for faster supervised learning in dynamical nonlinear neural networks is presented. It exploits the concept of adjoint operators to enable computation of changes in the network's response due to perturbations in all system parameters, using the solution of a single set of appropriately constructed linear equations. The lower bound on speedup per learning iteration over conventional methods for calculating the neuromorphic energy gradient is O(N²), where N is the number of neurons in the network.

1 INTRODUCTION
The biggest promise of artificial neural networks as computational tools lies in the hope that they will enable fast processing and synthesis of complex information patterns. In particular, considerable efforts have recently been devoted to the formulation of efficient methodologies for learning (e.g., Rumelhart et al., 1986; Pineda, 1988; Pearlmutter, 1989; Williams and Zipser, 1989; Barhen, Gulati and Zak, 1989). The development of learning algorithms is generally based upon the minimization of a neuromorphic energy function. The fundamental requirement of such an approach is the computation of the gradient of this objective function with respect to the various parameters of the neural architecture, e.g., synaptic weights, neural …
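The computational trick behind the claimed O(N²) speedup can be stated generically. The following is a sketch of standard adjoint sensitivity analysis under assumed notation (f, u, p, E, λ), not the paper's exact equations:

```latex
% Generic adjoint-sensitivity sketch (illustrative notation, not taken from the paper).
% Let the network's steady state u satisfy f(u, p) = 0 for parameters p,
% and let E(u, p) be the energy function to be minimized.
\[
\frac{dE}{dp}
  \;=\; \frac{\partial E}{\partial p} \;-\; \lambda^{\!\top}\frac{\partial f}{\partial p},
\qquad\text{where}\quad
\left(\frac{\partial f}{\partial u}\right)^{\!\top}\!\lambda
  \;=\; \left(\frac{\partial E}{\partial u}\right)^{\!\top}.
\]
% A single N-by-N linear solve for the adjoint variable lambda yields the
% sensitivities with respect to all O(N^2) synaptic weights, instead of one
% forward sensitivity solve per parameter.
```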
Neural Implementation of Motivated Behavior: Feeding in an Artificial Insect
Beer, Randall D., Chiel, Hillel J.
Most complex behaviors appear to be governed by internal motivational states or drives that modify an animal's responses to its environment. It is therefore of considerable interest to understand the neural basis of these motivational states. Drawing upon work on the neural basis of feeding in the marine mollusc Aplysia, we have developed a heterogeneous artificial neural network for controlling the feeding behavior of a simulated insect. We demonstrate that feeding in this artificial insect shares many characteristics with the motivated behavior of natural animals.

1 INTRODUCTION
While an animal's external environment certainly plays an extremely important role in shaping its actions, the behavior of even simpler animals is by no means solely reactive. The response of an animal to food, for example, cannot be explained only in terms of the physical stimuli involved. On two different occasions, the very same animal may behave in completely different ways when presented with seemingly identical pieces of food (e.g. …
An Analog VLSI Model of Adaptation in the Vestibulo-Ocular Reflex
DeWeerth, Stephen P., Mead, Carver
The vestibulo-ocular reflex (VOR) is the primary mechanism that controls the compensatory eye movements that stabilize retinal images during rapid head motion. The primary pathways of this system are feed-forward, with inputs from the semicircular canals and outputs to the oculomotor system. Since visual feedback is not used directly in the VOR computation, the system must exploit motor learning to perform correctly. Lisberger (1988) has proposed a model for adapting the VOR gain using image-slip information from the retina. We have designed and tested analog very large-scale integrated (VLSI) circuitry that implements a simplified version of Lisberger's adaptive VOR model.
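The adaptation principle can be illustrated with a minimal numerical sketch. The learning rate, signal names, and update rule below are assumptions chosen to show slip-driven gain adaptation in general, not the circuit or model described in the paper:

```python
import numpy as np

# Minimal sketch of slip-driven VOR gain adaptation (illustrative only;
# the learning rate and signal names are assumptions, not the VLSI circuit).
rng = np.random.default_rng(0)
gain = 0.5          # initial (incorrect) VOR gain; ideal compensation is 1.0
lr = 0.01           # assumed adaptation rate

for step in range(2000):
    head_velocity = rng.normal()                 # vestibular (canal) input
    eye_velocity = -gain * head_velocity         # feed-forward compensatory command
    retinal_slip = head_velocity + eye_velocity  # residual image motion on the retina
    # Adapt the gain so that slip correlated with head motion is driven to zero.
    gain += lr * retinal_slip * head_velocity

print(f"adapted gain ~ {gain:.3f}")  # converges toward 1.0
```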
Unsupervised Learning in Neurodynamics Using the Phase Velocity Field Approach
Zak, Michail, Toomarian, Nikzad Benny
A new concept for unsupervised learning based upon examples introduced to the neural network is proposed. Each example is considered as an interpolation node of the velocity field in the phase space. The velocities at these nodes are selected such that all the streamlines converge to an attracting set embedded in the subspace occupied by the cluster of examples. The synaptic interconnections are then found from a learning procedure that provides the selected field. The theory is illustrated by examples. This paper is devoted to the development of a new concept for unsupervised learning based upon examples introduced to an artificial neural network.
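The phase-velocity-field idea itself, independent of how it is encoded in synaptic weights, can be sketched numerically. The interpolation scheme and node velocities below are assumptions for illustration, not the paper's learning procedure:

```python
import numpy as np

# Illustrative sketch: examples act as interpolation nodes of a velocity field
# whose streamlines drift into the region occupied by the cluster of examples.
examples = np.array([[1.0, 1.0], [1.2, 0.8], [0.8, 1.1], [1.1, 1.2]])  # example cluster

def velocity_field(x, width=0.5):
    """Velocity at x: drift toward a locally weighted average of the example nodes."""
    w = np.exp(-np.sum((examples - x) ** 2, axis=1) / (2 * width ** 2))
    w /= w.sum() + 1e-12
    return (w @ examples) - x

# Integrate a streamline from an arbitrary initial state in phase space.
x = np.array([3.0, -2.0])
for _ in range(500):
    x = x + 0.05 * velocity_field(x)
print(x)  # ends up inside the region occupied by the examples
```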
Designing Application-Specific Neural Networks Using the Genetic Algorithm
Harp, Steven A., Samad, Tariq, Guha, Aloke
With the growing interest in the practical use of neural networks, addressing the problem of customizing networks for specific applications is becoming increasingly critical. It has repeatedly been observed that different network structures and learning parameters can substantially affect performance.
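The general approach of evolving a network specification can be sketched as follows. The genome fields, fitness proxy, and GA settings are assumptions for illustration, not the blueprint representation used in the paper:

```python
import random

# Minimal sketch of evolving a network specification with a genetic algorithm.
def random_genome():
    return {"hidden_units": random.randint(2, 64),
            "learning_rate": 10 ** random.uniform(-3, -1)}

def fitness(genome):
    # Placeholder: in practice, train a network built from `genome` and return
    # its validation performance (possibly penalized by network size).
    return -abs(genome["hidden_units"] - 16) - 100 * abs(genome["learning_rate"] - 0.01)

def mutate(genome):
    child = dict(genome)
    if random.random() < 0.5:
        child["hidden_units"] = max(2, child["hidden_units"] + random.randint(-4, 4))
    else:
        child["learning_rate"] *= 10 ** random.uniform(-0.3, 0.3)
    return child

population = [random_genome() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                   # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

print(max(population, key=fitness))
```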
Speaker Independent Speech Recognition with Neural Networks and Speech Knowledge
Bengio, Yoshua, Mori, Renato de, Cardin, Régis
We attempt to combine neural networks with knowledge from speech science to build a speaker independent speech recognition system. This knowledge is utilized in designing the preprocessing, input coding, output coding, output supervision and architectural constraints. To handle the temporal aspect of speech we combine delays, copies of activations of hidden and output units at the input level, and Back-Propagation for Sequences (BPS), a learning algorithm for networks with local self-loops. This strategy is demonstrated in several experiments, in particular a nasal discrimination task for which the application of a speech theory hypothesis dramatically improved generalization.

1 INTRODUCTION
The strategy put forward in this research effort is to combine the flexibility and learning abilities of neural networks with as much knowledge from speech science as possible in order to build a speaker independent automatic speech recognition system. This knowledge is utilized in each of the steps in the construction of an automated speech recognition system: preprocessing, input coding, output coding, output supervision, architectural design.
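The distinguishing feature of a local self-loop is that its gradient can be propagated forward in time with a single per-unit trace. The single-unit setting, variable names, and squared-error loss below are simplifications for illustration, not the full BPS networks used in the paper:

```python
import numpy as np

# Sketch of a unit with a local self-loop whose gradient is accumulated forward
# in time via a derivative trace (the general idea behind BPS-style updates).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w_in, w_self = 0.3, 0.5        # input weight and local self-loop weight
a, trace = 0.0, 0.0            # unit activation and d(activation)/d(w_self) trace
grad_w_self = 0.0
inputs  = np.array([0.2, 0.7, -0.1, 0.4])
targets = np.array([0.4, 0.6,  0.5, 0.5])

for x, t in zip(inputs, targets):
    a_prev = a
    a = sigmoid(w_in * x + w_self * a_prev)
    # Forward-propagated derivative of the activation w.r.t. the self-loop weight.
    trace = a * (1.0 - a) * (a_prev + w_self * trace)
    # Accumulate the gradient of a squared error at each time step.
    grad_w_self += (a - t) * trace

w_self -= 0.1 * grad_w_self    # one gradient step on the self-loop weight
```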
A Cost Function for Internal Representations
Krogh, Anders, Thorbergsson, C. I., Hertz, John A.
We introduce a cost function for learning in feed-forward neural networks which is an explicit function of the internal representation in addition to the weights. The learning problem can then be formulated as two simple perceptrons and a search for internal representations. Back-propagation is recovered as a limit. The frequency of successful solutions is better for this algorithm than for back-propagation when weights and hidden units are updated on the same timescale, i.e., once every learning step.

1 INTRODUCTION
In their review of back-propagation in layered networks, Rumelhart et al. (1986) describe the learning process in terms of finding good "internal representations" of the input patterns on the hidden units. However, the search for these representations is an indirect one, since the variables which are adjusted in its course are the connection weights, not the activations of the hidden units themselves when specific input patterns are fed into the input layer. Rather, the internal representations are represented implicitly in the connection weight values. More recently, Grossman et al. (1988 and 1989) suggested a way in which the search for internal representations could be made much more explicit.
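One way to see how a cost can depend explicitly on the internal representations is the following schematic form, written in assumed notation (input ξ, target ζ, hidden representation h, weight matrices W) and intended only as a sketch consistent with the abstract, not the paper's exact cost function:

```latex
% Schematic cost that treats the internal representations h^mu as free variables
% alongside the weights (illustrative form, not the paper's exact E).
\[
E\bigl(W^{(1)},W^{(2)},\{h^\mu\}\bigr)
  \;=\; \sum_\mu \bigl\|\, \zeta^\mu - g\bigl(W^{(2)} h^\mu\bigr) \bigr\|^2
  \;+\; \beta \sum_\mu \bigl\|\, h^\mu - g\bigl(W^{(1)} \xi^\mu\bigr) \bigr\|^2 .
\]
% Minimizing over W^(2) with the h^mu fixed, and over W^(1) with the h^mu fixed,
% are two simple perceptron problems; minimizing over the h^mu themselves is the
% explicit search for internal representations.  Heuristically, enforcing the
% second term exactly pins h^mu = g(W^(1) xi^mu), which is the sense in which a
% back-propagation-like limit can appear.
```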
Generalization and Scaling in Reinforcement Learning
Ackley, David H., Littman, Michael L.
In associative reinforcement learning, an environment generates input vectors, a learning system generates possible output vectors, and a reinforcement function computes feedback signals from the input-output pairs. The task is to discover and remember input-output pairs that generate rewards. Especially difficult cases occur when rewards are rare, since the expected time for any algorithm can grow exponentially with the size of the problem. Nonetheless, if a reinforcement function possesses regularities, and a learning algorithm exploits them, learning time can be reduced below that of non-generalizing algorithms. This paper describes a neural network algorithm called complementary reinforcement back-propagation (CRBP), and reports simulation results on problems designed to offer differing opportunities for generalization.
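The complementary-target idea at the heart of CRBP can be shown with a single logistic layer. The one-layer network, toy reinforcement function, and learning rate below are assumptions for illustration; the paper applies the rule within multi-layer back-propagation networks:

```python
import numpy as np

# Minimal sketch of complementary reinforcement: sample a stochastic binary output,
# then train toward it on reward and toward its complement on punishment.
rng = np.random.default_rng(0)
n_in, n_out = 4, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))
lr = 0.5

def reward(x, y):
    # Toy reinforcement function: reward outputs that copy the first n_out input bits.
    return 1.0 if np.array_equal(y, x[:n_out]) else -1.0

for step in range(5000):
    x = rng.integers(0, 2, size=n_in).astype(float)
    p = 1.0 / (1.0 + np.exp(-W @ x))            # output probabilities
    y = (rng.random(n_out) < p).astype(float)   # sampled stochastic binary output
    r = reward(x, y)
    target = y if r > 0 else 1.0 - y            # complement the target on punishment
    W += lr * np.outer(target - p, x)           # delta rule toward the chosen target
```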