Neural Implementation of Motivated Behavior: Feeding in an Artificial Insect
Beer, Randall D., Chiel, Hillel J.
Most complex behaviors appear to be governed by internal motivational states or drives that modify an animal's responses to its environment. It is therefore of considerable interest to understand the neural basis of these motivational states. Drawing upon work on the neural basis of feeding in the marine mollusc Aplysia, we have developed a heterogeneous artificial neural network for controlling the feeding behavior of a simulated insect. We demonstrate that feeding in this artificial insect shares many characteristics with the motivated behavior of natural animals.

1 INTRODUCTION

While an animal's external environment certainly plays an extremely important role in shaping its actions, the behavior of even simpler animals is by no means solely reactive. The response of an animal to food, for example, cannot be explained only in terms of the physical stimuli involved. On two different occasions, the very same animal may behave in completely different ways when presented with seemingly identical pieces of food (e.g.
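The central idea of this abstract, that an internal drive state makes the same stimulus produce different responses, can be illustrated with a deliberately tiny toy function. This is not Beer and Chiel's network; the variable names, the linear drive, and the threshold are all illustrative assumptions.

```python
def feeding_response(food_stimulus: float, satiation: float,
                     threshold: float = 0.5) -> bool:
    """Toy 'motivated behavior': the same food stimulus elicits feeding
    only when the internal drive (energy deficit) is high enough."""
    drive = max(0.0, 1.0 - satiation)        # hunger grows as satiation falls
    return drive * food_stimulus > threshold

print(feeding_response(0.9, satiation=0.2))  # hungry animal feeds -> True
print(feeding_response(0.9, satiation=0.9))  # sated animal ignores food -> False
```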
An Analog VLSI Model of Adaptation in the Vestibulo-Ocular Reflex
DeWeerth, Stephen P., Mead, Carver
The vestibulo-ocular reflex (VOR) is the primary mechanism that controls the compensatory eye movements that stabilize retinal images during rapid head motion. The primary pathways of this system are feed-forward, with inputs from the semicircular canals and outputs to the oculomotor system. Since visual feedback is not used directly in the VOR computation, the system must exploit motor learning to perform correctly. Lisberger (1988) has proposed a model for adapting the VOR gain using image-slip information from the retina. We have designed and tested analog very large-scale integrated (VLSI) circuitry that implements a simplified version of Lisberger's adaptive VOR model.
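As a rough illustration of the kind of slip-driven gain adaptation this abstract describes, the sketch below uses a simple LMS-like update. The update rule, learning rate, and signal names are assumptions for illustration only; they are not taken from Lisberger's model or from the VLSI circuit.

```python
import numpy as np

def simulate_vor_adaptation(n_steps=5000, true_gain=1.6, eta=1e-3):
    """Adapt a VOR gain g so the eye command cancels head rotation.

    Retinal slip is the residual image motion; perfect compensation
    requires the learned gain g to match the (unknown) plant gain.
    """
    g = 1.0                               # initial VOR gain estimate
    rng = np.random.default_rng(0)
    for _ in range(n_steps):
        head = rng.standard_normal()      # head rotation rate (canal signal)
        eye = -g * head                   # feed-forward oculomotor command
        slip = true_gain * head + eye     # residual image motion on the retina
        g += eta * slip * head            # slip-driven gain correction
    return g

print(simulate_vor_adaptation())          # approaches true_gain
```

Because slip here equals (true_gain - g) times head velocity, correlating slip with the head signal drives g toward the compensating value without ever using visual feedback directly in the reflex loop.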
Unsupervised Learning in Neurodynamics Using the Phase Velocity Field Approach
Zak, Michail, Toomarian, Nikzad Benny
A new concept for unsupervised learning based upon examples introduced to the neural network is proposed. Each example is considered as an interpolation node of the velocity field in the phase space. The velocities at these nodes are selected such that all the streamlines converge to an attracting set embedded in the subspace occupied by the cluster of examples. The synaptic interconnections are found from a learning procedure that provides the selected field. The theory is illustrated by examples.

This paper is devoted to the development of a new concept for unsupervised learning based upon examples introduced to an artificial neural network.
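The following toy sketch mimics the construction described here: each example acts as an interpolation node of a phase-space velocity field, with node velocities chosen to point into the cluster so that streamlines converge onto it. The radial-basis interpolation, the weak global pull term, and the Euler integration are my own assumptions, not the paper's learning procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
examples = rng.normal(loc=[2.0, -1.0], scale=0.3, size=(20, 2))  # one cluster
centroid = examples.mean(axis=0)
node_vel = centroid - examples        # node velocities point into the cluster

def velocity(x, width=0.5):
    """Interpolate the velocity field from the example nodes (RBF weights)."""
    w = np.exp(-np.sum((examples - x) ** 2, axis=1) / (2 * width ** 2))
    w /= w.sum() + 1e-12
    return w @ node_vel + 0.1 * (centroid - x)  # weak global pull for stability

# Integrate a streamline: it should converge onto the cluster region.
x = np.array([5.0, 4.0])
for _ in range(200):
    x = x + 0.1 * velocity(x)
print(x, centroid)                    # x ends near the cluster
```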
Neurally Inspired Plasticity in Oculomotor Processes
We have constructed a two-axis camera positioning system which is roughly analogous to a single human eye. This Artificial-Eye (A-eye) combines the signals generated by two rate gyroscopes with motion information extracted from visual analysis to stabilize its camera. This stabilization process is similar to the vestibulo-ocular response (VOR); like the VOR, A-eye learns a system model that can be incrementally modified to adapt to changes in its structure, performance and environment. A-eye is an example of a robust sensory system that performs computations that can be of significant use to the designers of mobile robots.

1 Introduction

We have constructed an "artificial eye" (A-eye), an autonomous robot that incorporates a two-axis camera positioning system (figure 1). Like the human oculomotor system, A-eye can estimate the rotation rate of its body with a gyroscope and estimate the rotation rate of its "eye" by measuring image slip.
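To illustrate the incremental adaptation claimed above, the sketch below reuses a slip-driven gain update (as in the VOR sketch earlier in this listing) and shows re-convergence after a simulated change in the motor plant. The plant model, learning rule, and constants are illustrative assumptions, not A-eye's actual system model.

```python
import numpy as np

rng = np.random.default_rng(2)
g, eta = 1.0, 1e-2
plant = 1.0                           # motor-plant gain (changes mid-run)
history = []
for t in range(4000):
    if t == 2000:
        plant = 0.6                   # simulate a structural change
    head = rng.standard_normal()      # gyroscope reading
    eye = -g * head                   # stabilizing camera command
    slip = head + plant * eye         # residual from visual motion analysis
    g += eta * slip * head            # incremental correction from vision
    history.append(g)
print(history[1999], history[-1])     # ~1.0, then re-converges to ~1/0.6
```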
Computer Simulation of Oscillatory Behavior in Cerebral Cortical Networks
Wilson, Matthew A., Bower, James M.
It has been known for many years that specific regions of the working cerebral cortex display periodic variations in correlated cellular activity. While the olfactory system has been the focus of much of this work, similar behavior has recently been observed in primary visual cortex. We have developed models of both the olfactory and visual cortex which replicate the observed oscillatory properties of these networks. Using these models we have examined the dependence of oscillatory behavior on single cell properties and network architectures. We discuss the idea that the oscillatory events recorded from cerebral cortex may be intrinsic to the architecture of cerebral cortex as a whole, and that these rhythmic patterns may be important in coordinating neuronal activity during sensory processing.
Note on Development of Modularity in Simple Cortical Models
Chernjavsky, Alex, Moody, John E.
We show that localized activity patterns in a layer of cells, collective excitations, can induce the formation of modular structures in the anatomical connections via a Hebbian learning mechanism. The networks are spatially homogeneous before learning, but the spontaneous emergence of localized collective excitations, and subsequently of modularity in the connection patterns, breaks translational symmetry. This spontaneous symmetry breaking is similar to the phenomena which drive pattern formation in reaction-diffusion systems. We have identified requirements on the patterns of lateral connections and on the gains of internal units which are essential for the development of modularity. These essential requirements will most likely remain operative when more complicated (and biologically realistic) models are considered.
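A minimal sketch of the mechanism described: Mexican-hat lateral connections produce localized activity bumps from unstructured input, and a normalized Hebbian rule then imprints that localization onto initially homogeneous afferent weights. The specific lateral profile, gains, and normalization below are assumptions; as the abstract notes, whether modularity actually develops hinges on exactly these choices.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64                                        # units arranged on a ring
pos = np.arange(N)
d = np.abs(pos[:, None] - pos[None, :])
d = np.minimum(d, N - d)                      # ring distance
# Mexican-hat lateral connections: local excitation, broader inhibition
L = 1.5 * np.exp(-d**2 / (2 * 2.0**2)) - 0.8 * np.exp(-d**2 / (2 * 6.0**2))

W = rng.uniform(0.4, 0.6, size=(N, N))        # afferents, spatially homogeneous
W /= W.sum(axis=1, keepdims=True)
eta = 0.05
for _ in range(500):
    x = rng.random(N)                         # unstructured input pattern
    h = W @ x
    h = h - h.mean()                          # keep only the spatial contrast
    a = np.maximum(h, 0.0)
    for _ in range(30):                       # settle into localized bumps
        a = np.maximum(h + L @ a, 0.0)
        a /= a.max() + 1e-12
    W += eta * np.outer(a, x)                 # Hebbian imprinting of the bumps
    W /= W.sum(axis=1, keepdims=True)         # normalization bounds the weights
# Translational symmetry is broken: per-row weight structure now varies
print(np.round(W.std(axis=1) * 1e3, 2)[:8])
```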
Effects of Firing Synchrony on Signal Propagation in Layered Networks
Kenyon, G. T., Fetz, Eberhard E., Puff, R. D.
Spiking neurons which integrate to threshold and fire were used to study the transmission of frequency-modulated (FM) signals through layered networks. Firing correlations between cells in the input layer were found to modulate the transmission of FM signals under certain dynamical conditions. A tonic level of activity was maintained by providing each cell with a source of Poisson-distributed synaptic input. When the average membrane depolarization produced by the synaptic input was sufficiently below threshold, the firing correlations between cells in the input layer could greatly amplify the signal present in subsequent layers. When the depolarization was sufficiently close to threshold, however, the firing synchrony between cells in the initial layers could no longer affect the propagation of FM signals. In this latter case, integrate-and-fire neurons could be effectively modeled by simpler analog elements governed by a linear input-output relation.
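A minimal sketch of the basic cell model described, a leaky integrate-and-fire neuron held at a tonic activity level by Poisson-distributed synaptic input. The parameter values are illustrative, chosen only so that the mean depolarization (w * rate * tau) sits below threshold, the regime in which the abstract says correlations matter most.

```python
import numpy as np

def lif_with_poisson(rate_hz=800.0, w=0.5, dt=1e-3, t_max=2.0,
                     tau=0.02, v_thresh=10.0, seed=0):
    """Leaky integrate-and-fire cell driven by Poisson synaptic input.

    Mean depolarization is w * rate_hz * tau; how far this sits below
    v_thresh controls how strongly input correlations shape the output.
    """
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        n_in = rng.poisson(rate_hz * dt)       # Poisson-distributed arrivals
        v += -v * (dt / tau) + w * n_in        # leaky integration of inputs
        if v >= v_thresh:                      # threshold crossing -> spike
            spikes += 1
            v = 0.0                            # reset after firing
    return spikes / t_max

print(lif_with_poisson())                      # output firing rate in Hz
```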
Optimal Brain Damage
LeCun, Yann, Denker, John S., Solla, Sara A.
We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.

1 INTRODUCTION

Most successful applications of neural network learning to real-world problems have been achieved using highly structured networks of rather large size [for example (Waibel, 1989; Le Cun et al., 1990a)]. As applications become more complex, the networks will presumably become even larger and more structured.
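The second-derivative tradeoff described above can be sketched as follows: score each weight by a second-order estimate of the training-error increase its deletion would cause (roughly h_kk * w_k**2 / 2 under a diagonal approximation), then delete the least salient weights. How the diagonal Hessian terms are obtained, by a backpropagation-like pass in practice, is omitted here; the helper name and its arguments are illustrative.

```python
import numpy as np

def obd_prune(weights, hessian_diag, frac=0.2):
    """Optimal-Brain-Damage-style pruning sketch.

    Saliency of weight k is h_kk * w_k**2 / 2: the estimated increase in
    training error from deleting that weight (diagonal, second-order
    approximation). The least salient fraction is removed.
    """
    saliency = 0.5 * hessian_diag * weights**2
    k = int(frac * weights.size)
    cut = np.argsort(saliency)[:k]          # least important weights
    pruned = weights.copy()
    pruned[cut] = 0.0                       # delete, then retrain in practice
    return pruned, cut

w = np.array([0.8, -0.05, 1.2, 0.01, -0.6])
h = np.array([2.0, 1.0, 0.5, 3.0, 1.5])
print(obd_prune(w, h, frac=0.4))            # zeroes the two tiny weights
```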
Time Dependent Adaptive Neural Networks
Pineda, Fernando J.
A comparison of algorithms that minimize error functions to train the trajectories of recurrent networks reveals how complexity is traded off for causality. These algorithms are also related to time-independent formalisms. It is suggested that causal and scalable algorithms are possible when the activation dynamics of adaptive neurons is fast compared to the behavior to be learned. Standard continuous-time recurrent backpropagation is used in an example.

1 INTRODUCTION

Training the time-dependent behavior of a neural network model involves the minimization of a function that measures the difference between an actual trajectory and a desired trajectory. The standard method of accomplishing this minimization is to calculate the gradient of an error function with respect to the weights of the system and then to use the gradient in a minimization algorithm (e.g.
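Reconstructed from this description, the quantity being minimized is an integrated trajectory error, with the weights following its gradient; the notation (x for the actual trajectory, d for the desired one) is mine, not the paper's:

```latex
E(\mathbf{w}) = \frac{1}{2}\int_{0}^{T} \sum_i \bigl(x_i(t) - d_i(t)\bigr)^{2}\,dt,
\qquad
\Delta w_{ij} = -\eta\,\frac{\partial E}{\partial w_{ij}} .
```

The complexity-versus-causality tradeoff lies in how the gradient is computed: forward (real-time) schemes are causal but scale poorly with network size, while adjoint schemes that integrate backward in time are cheaper but acausal.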
Non-Boltzmann Dynamics in Networks of Spiking Neurons
Crair, Michael C., Bialek, William
We study networks of spiking neurons in which spikes are fired as a Poisson process. The state of a cell is determined by the instantaneous firing rate, and in the limit of high firing rates our model reduces to that studied by Hopfield. We find that the inclusion of spiking results in several new features, such as a noise-induced asymmetry between "on" and "off" states of the cells and probability currents which destroy the usual description of network dynamics in terms of energy surfaces. Taking account of spikes also allows us to calibrate network parameters such as "synaptic weights" against experiments on real synapses. Realistic forms of the postsynaptic response alter the network dynamics, which suggests a novel dynamical learning mechanism.
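The sketch below is one plausible reading of such a model: each cell emits Poisson spike counts at a rate set by its state, and the mean dynamics recover a deterministic Hopfield-style rate equation while the spikes contribute shot noise. The scaling, the sigmoid rate function, and all constants are my assumptions, not the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(4)
N, dt = 50, 1e-3
J = rng.standard_normal((N, N)) / np.sqrt(N)
J = 0.5 * (J + J.T)
np.fill_diagonal(J, 0.0)                         # symmetric, as in Hopfield
u = rng.standard_normal(N)                       # membrane-like state
tau, r_max = 0.02, 200.0

def rate(u):
    """Instantaneous Poisson firing rate as a function of cell state."""
    return r_max / (1.0 + np.exp(-u))

for _ in range(2000):
    spikes = rng.poisson(rate(u) * dt)           # Poisson spike counts per bin
    u += dt / tau * (-u) + (J @ spikes) / (r_max * tau)
# On average this follows the deterministic rate equation
# du/dt = (-u + J @ sigmoid(u)) / tau (the high-rate Hopfield limit),
# but the Poisson spikes add shot noise around that trajectory.
print(u[:5])
```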