The CHIR Algorithm for Feed Forward Networks with Binary Weights
A new learning algorithm, Learning by Choice of Internal Representations (CHIR), was recently introduced. Whereas many algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. The algorithm applies a search procedure in the space of internal representations, together with a cooperative adaptation of the weights (e.g., by using the perceptron learning rule). Since the introduction of its basic, single-output version, the CHIR algorithm has been generalized to train any feed forward network of binary neurons. Here we present the generalized version of the CHIR algorithm, and further demonstrate its versatility by describing how it can be modified to train networks with binary (±1) weights. Preliminary tests of this binary version on the random teacher problem are also reported.
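The alternation the abstract describes (a search over internal representations plus perceptron-style weight adaptation) can be sketched for the simplest case of one hidden layer and a single ±1 output unit. This is a minimal illustration, not the paper's exact procedure: keeping integer weights and binarizing them through a sign function is our assumption about how the binary (±1) constraint might be enforced, and the random bit-flip repair step is a crude stand-in for the paper's search heuristics.

import numpy as np

rng = np.random.default_rng(0)

def sgn(x):
    # sign function with sgn(0) = +1, so activations stay in {-1, +1}
    return np.where(x >= 0, 1, -1)

def chir_binary(X, y, n_hidden=4, iters=100, sweeps=20):
    """Toy CHIR-style trainer: one hidden layer, single +/-1 output.
    X in {-1,+1}^(P x N), y in {-1,+1}^P. Integer 'internal' weights
    are kept and binarized through sgn(); that clipping trick is an
    assumption, not necessarily the paper's binary-weight rule."""
    P, N = X.shape
    Wh = rng.choice([-1, 1], size=(n_hidden, N))   # input -> hidden
    wo = rng.choice([-1, 1], size=n_hidden)        # hidden -> output
    H = sgn(X @ sgn(Wh).T)                         # internal representations
    for _ in range(iters):
        # (1) perceptron sweeps: fit the output unit to the current H
        for _ in range(sweeps):
            errors = 0
            for mu in range(P):
                if sgn(sgn(wo) @ H[mu]) != y[mu]:
                    wo = wo + y[mu] * H[mu]
                    errors += 1
            if errors == 0:
                break
        # (2) search step: repair representations the output unit still
        #     gets wrong by flipping one hidden bit (crude stand-in for
        #     the paper's search over internal representations)
        for mu in range(P):
            if sgn(sgn(wo) @ H[mu]) != y[mu]:
                k = rng.integers(n_hidden)
                H[mu, k] = -H[mu, k]
        # (3) perceptron sweeps: make each hidden unit reproduce its
        #     (possibly repaired) internal representation from X
        for _ in range(sweeps):
            errors = 0
            for mu in range(P):
                h = sgn(sgn(Wh) @ X[mu])
                for k in np.where(h != H[mu])[0]:
                    Wh[k] = Wh[k] + H[mu, k] * X[mu]
                    errors += 1
            if errors == 0:
                break
        out = sgn(sgn(X @ sgn(Wh).T) @ sgn(wo))
        if np.array_equal(out, y):                 # all patterns correct
            return sgn(Wh), sgn(wo)
        H = sgn(X @ sgn(Wh).T)                     # refresh representations
    return sgn(Wh), sgn(wo)

# usage: two-bit parity (XOR); may need several random restarts
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
y = np.array([-1, 1, 1, -1])
Wh, wo = chir_binary(X, y)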
Full-Sized Knowledge-Based Systems Research Workshop
Silverman, Barry G., Murray, Arthur J.
The Full-Sized Knowledge-Based Systems Research Workshop was held May 7-8, 1990, in Washington, D.C., as part of the AI Systems in Government Conference, sponsored by the IEEE Computer Society, the Mitre Corporation, and George Washington University in cooperation with AAAI. The goal of the workshop was to convene an international group of researchers and practitioners to share insights into the problems of building and deploying Full-Sized Knowledge-Based Systems (FSKBSs).
A Reconfigurable Analog VLSI Neural Network Chip
Satyanarayana, Srinagesh, Tsividis, Yannis P., Graf, Hans Peter
The distributed-neuron synapses are arranged in blocks of 16, which we call '4 x 4 tiles'. Switch matrices are interleaved between these tiles to provide programmability of interconnections. With a small area overhead (15%), the 1024 units of the network can be rearranged in various configurations. Possible configurations include a 12-32-12 network, a 16-12-12-16 network, and two 12-32 networks (the numbers separated by dashes indicate the number of units per layer, including the input layer). Weights are stored in analog form on MOS capacitors.
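As a rough feasibility check on the configurations listed above, the tile arithmetic can be sketched in a few lines of Python. The ceil-based occupancy rule, the helper names, and the reading of the 1024 units as individual synapse cells are our assumptions; the actual switch-matrix routing is more constrained than this.

from math import ceil

TILE = 4                                  # one tile spans 4 inputs x 4 neurons
TOTAL_TILES = 1024 // (TILE * TILE)       # 64 tiles, reading the 1024 units
                                          # as synapse cells (an assumption)

def tiles_needed(layers):
    """Tiles occupied by a layered net; 'layers' lists units per layer,
    input layer included, e.g. [12, 32, 12]. A full m x n connection
    block is assumed to occupy ceil(m/4) * ceil(n/4) tiles."""
    return sum(ceil(m / TILE) * ceil(n / TILE)
               for m, n in zip(layers, layers[1:]))

def fits_on_chip(*networks):
    # several networks may share the chip, as with the two 12-32 nets
    return sum(tiles_needed(net) for net in networks) <= TOTAL_TILES

print(fits_on_chip([12, 32, 12]))         # True (48 tiles)
print(fits_on_chip([16, 12, 12, 16]))     # True (33 tiles)
print(fits_on_chip([12, 32], [12, 32]))   # True (48 tiles)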
Adjoint Operator Algorithms for Faster Learning in Dynamical Neural Networks
Barhen, Jacob, Toomarian, Nikzad Benny, Gulati, Sandeep
A methodology for faster supervised learning in dynamical nonlinear neural networks is presented. It exploits the concept of adjoint operators to enable computation of changes in the network's response due to perturbations in all system parameters, using the solution of a single set of appropriately constructed linear equations. The lower bound on speedup per learning iteration over conventional methods for calculating the neuromorphic energy gradient is O(N²), where N is the number of neurons in the network.

1 INTRODUCTION

The biggest promise of artificial neural networks as computational tools lies in the hope that they will enable fast processing and synthesis of complex information patterns. In particular, considerable efforts have recently been devoted to the formulation of efficient methodologies for learning (e.g., Rumelhart et al., 1986; Pineda, 1988; Pearlmutter, 1989; Williams and Zipser, 1989; Barhen, Gulati and Zak, 1989). The development of learning algorithms is generally based upon the minimization of a neuromorphic energy function. The fundamental requirement of such an approach is the computation of the gradient of this objective function with respect to the various parameters of the neural architecture, e.g., synaptic weights, neural
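The adjoint construction behind the quoted O(N²) speedup can be summarized in a generic steady-state form; this is our paraphrase of the standard adjoint-sensitivity argument, not the paper's exact equations. Suppose the network relaxes to a fixed point $\mathbf{u}$ satisfying $\mathbf{F}(\mathbf{u}, \mathbf{p}) = \mathbf{0}$, and let $E(\mathbf{u}, \mathbf{p})$ be the neuromorphic energy. Differentiating the fixed-point condition, the direct (forward) method requires one $N \times N$ linear solve per parameter:

\[
\frac{\partial \mathbf{F}}{\partial \mathbf{u}} \, \frac{\partial \mathbf{u}}{\partial p_k} \;=\; -\,\frac{\partial \mathbf{F}}{\partial p_k}, \qquad k = 1, \dots, M ,
\]

and a fully connected network has $M = O(N^2)$ weights. The adjoint method instead solves a single transposed system,

\[
\left( \frac{\partial \mathbf{F}}{\partial \mathbf{u}} \right)^{\!\top} \boldsymbol{\lambda} \;=\; \left( \frac{\partial E}{\partial \mathbf{u}} \right)^{\!\top},
\qquad
\frac{dE}{dp_k} \;=\; \frac{\partial E}{\partial p_k} \;-\; \boldsymbol{\lambda}^{\top} \frac{\partial \mathbf{F}}{\partial p_k},
\]

after which every gradient component costs only an inner product, replacing $O(N^2)$ linear solves per iteration with one.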
Neural Implementation of Motivated Behavior: Feeding in an Artificial Insect
Beer, Randall D., Chiel, Hillel J.
Most complex behaviors appear to be governed by internal motivational states or drives that modify an animal's responses to its environment. It is therefore of considerable interest to understand the neural basis of these motivational states. Drawing upon work on the neural basis of feeding in the marine mollusc Aplysia, we have developed a heterogeneous artificial neural network for controlling the feeding behavior of a simulated insect. We demonstrate that feeding in this artificial insect shares many characteristics with the motivated behavior of natural animals.

1 INTRODUCTION

While an animal's external environment certainly plays an extremely important role in shaping its actions, the behavior of even simpler animals is by no means solely reactive. The response of an animal to food, for example, cannot be explained only in terms of the physical stimuli involved. On two different occasions, the very same animal may behave in completely different ways when presented with seemingly identical pieces of food (e.g.
An Analog VLSI Model of Adaptation in the Vestibulo-Ocular Reflex
DeWeerth, Stephen P., Mead, Carver
The vestibulo-ocular reflex (VOR) is the primary mechanism that controls the compensatory eye movements that stabilize retinal images during rapid head motion. The primary pathways of this system are feed-forward, with inputs from the semicircular canals and outputs to the oculomotor system. Since visual feedback is not used directly in the VOR computation, the system must exploit motor learning to perform correctly. Lisberger (1988) has proposed a model for adapting the VOR gain using image-slip information from the retina. We have designed and tested analog very large-scale integrated (VLSI) circuitry that implements a simplified version of Lisberger's adaptive VOR model.
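The adaptation principle summarized above (using retinal image slip as an error signal for the feed-forward gain) lends itself to a tiny simulation. This is a hedged sketch: simulate_vor_adaptation, the sign conventions, and the correlational learning rule are our own simplifications, not the published model or the circuit's dynamics.

import numpy as np

def simulate_vor_adaptation(steps=5000, eta=1e-3, seed=0):
    """Discrete-time toy of VOR gain adaptation from retinal image slip.
    The sign conventions and the correlational update are simplifying
    assumptions, not Lisberger's model or the chip's actual dynamics."""
    rng = np.random.default_rng(seed)
    gain = 0.3                        # start with a miscalibrated reflex
    for _ in range(steps):
        head = rng.normal()           # head-velocity sample from the canals
        eye = -gain * head            # feed-forward compensatory command
        slip = head + eye             # residual image motion on the retina
        gain += eta * slip * head     # correlate slip with head motion
    return gain

print(simulate_vor_adaptation())      # drifts toward the ideal gain of 1.0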
Unsupervised Learning in Neurodynamics Using the Phase Velocity Field Approach
Zak, Michail, Toomarian, Nikzad Benny
A new concept for unsupervised learning based upon examples introduced to the neural network is proposed. Each example is considered as an interpolation node of the velocity field in the phase space. The velocities at these nodes are selected such that all the streamlines converge to an attracting set embedded in the subspace occupied by the cluster of examples. The synaptic interconnections are found from a learning procedure that provides the selected field. The theory is illustrated by examples. This paper is devoted to the development of a new concept for unsupervised learning based upon examples introduced to an artificial neural network.
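The geometric picture in this abstract (examples as interpolation nodes of a phase-space velocity field whose streamlines flow into the cluster) can be illustrated with a small sketch. The RBF interpolation, the function names, and the node-attraction rule are our assumptions; the paper derives the field and the synaptic interconnections differently.

import numpy as np

def velocity_field(x, nodes, width=1.0):
    """RBF-interpolated velocity field: near each example node the flow
    points toward the cluster. Attraction toward the nodes is our
    stand-in for the paper's choice of nodal velocities."""
    w = np.exp(-np.sum((nodes - x) ** 2, axis=1) / (2 * width ** 2))
    pull = nodes - x                      # directions from x to the nodes
    return (w[:, None] * pull).sum(axis=0) / (w.sum() + 1e-12)

def streamline(x0, nodes, dt=0.05, steps=400):
    """Integrate dx/dt = v(x) by forward Euler; the trajectory should
    settle onto the attracting set embedded in the example cluster."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * velocity_field(x, nodes)
    return x

nodes = np.array([[1.0, 1.0], [1.2, 0.8], [0.8, 1.1]])  # example patterns
print(streamline([-2.0, 3.0], nodes))    # ends up inside the cluster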
Designing Application-Specific Neural Networks Using the Genetic Algorithm
Harp, Steven A., Samad, Tariq, Guha, Aloke
With the growing interest in the practical use of neural networks, addressing the problem of customizing networks for specific applications is becoming increasingly critical. It has repeatedly been observed that different network structures and learning parameters can substantially affect performance.