Time Dependent Adaptive Neural Networks
Pineda, Fernando J.
Center for Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109
A comparison of algorithms that minimize error functions to train the trajectories of recurrent networks reveals how complexity is traded off against causality. These algorithms are also related to time-independent formalisms. It is suggested that causal and scalable algorithms are possible when the activation dynamics of the adaptive neurons are fast compared to the behavior to be learned. Standard continuous-time recurrent backpropagation is used in an example. Training the time-dependent behavior of a neural network model involves minimizing a function that measures the difference between an actual trajectory and a desired trajectory. The standard method of accomplishing this minimization is to calculate the gradient of an error function with respect to the weights of the system and then to use that gradient in a minimization algorithm (e.g.
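The central object here is the gradient of a trajectory error with respect to the weights of a recurrent network. The sketch below is not Pineda's algorithm; it only illustrates that underlying principle on a toy continuous-time network, with the gradient estimated by finite differences and with all sizes, targets, and learning rates chosen arbitrarily for illustration.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): gradient descent on a trajectory
# error for a small continuous-time recurrent network
#   tau * dx/dt = -x + tanh(W x) + I(t)
# The gradient is estimated by finite differences for clarity; the paper's
# point concerns how exact gradient algorithms trade complexity for causality.

def simulate(W, I, x0, dt=0.01, steps=200, tau=1.0):
    """Integrate the network ODE with forward Euler and return the trajectory."""
    x = x0.copy()
    traj = []
    for k in range(steps):
        x = x + dt / tau * (-x + np.tanh(W @ x) + I[k])
        traj.append(x.copy())
    return np.array(traj)

def trajectory_error(W, I, x0, target):
    """E = 1/2 * sum_t |x(t) - d(t)|^2  (discretized error functional)."""
    traj = simulate(W, I, x0, steps=len(target))
    return 0.5 * np.sum((traj - target) ** 2)

def numerical_gradient(W, I, x0, target, eps=1e-5):
    """Finite-difference estimate of dE/dW."""
    grad = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            grad[i, j] = (trajectory_error(Wp, I, x0, target)
                          - trajectory_error(Wm, I, x0, target)) / (2 * eps)
    return grad

# Hypothetical toy problem: drive a 3-unit network toward a sinusoidal target.
rng = np.random.default_rng(0)
n, steps = 3, 200
W = 0.1 * rng.standard_normal((n, n))
I = np.zeros((steps, n))
x0 = np.zeros(n)
t = np.linspace(0, 2 * np.pi, steps)
target = 0.5 * np.sin(t)[:, None] * np.ones(n)

for epoch in range(20):                      # plain gradient descent
    W -= 0.05 * numerical_gradient(W, I, x0, target)
print("final trajectory error:", trajectory_error(W, I, x0, target))
```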
Non-Boltzmann Dynamics in Networks of Spiking Neurons
Crair, Michael C., Bialek, William
We study networks of spiking neurons in which spikes are fired as a Poisson process. The state of a cell is determined by the instantaneous firing rate, and in the limit of high firing rates our model reduces to that studied by Hopfield. We find that the inclusion of spiking results in several new features, such as a noise-induced asymmetry between "on" and "off" states of the cells and probability currents which destroy the usual description of network dynamics in terms of energy surfaces. Taking account of spikes also allows us to calibrate network parameters such as "synaptic weights" against experiments on real synapses. Realistic forms of the postsynaptic response alter the network dynamics, which suggests a novel dynamical learning mechanism.
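As a rough illustration of the kind of model described (not the authors' formulation), the sketch below simulates a Hopfield-style network whose units emit spikes as a Poisson process with a sigmoidal rate function and exponentially filtered postsynaptic traces; the network size, rate ceiling, and time constants are assumed values.

```python
import numpy as np

# Illustrative sketch only: each unit fires Poisson spikes at a rate that is a
# sigmoid of its synaptic input; postsynaptic potentials are exponentially
# decaying traces of the spike trains. In the high-rate limit the traces track
# the mean rates and the dynamics approach deterministic rate dynamics.

rng = np.random.default_rng(1)
n, dt, steps = 20, 1e-3, 5000        # units, time step (s), simulation steps
r_max, tau_syn = 100.0, 0.02         # peak rate (Hz), synaptic time constant (s)

W = rng.standard_normal((n, n)) / np.sqrt(n)   # hypothetical random weights
np.fill_diagonal(W, 0.0)

trace = np.zeros(n)                  # filtered spike trains (postsynaptic trace)
for _ in range(steps):
    h = W @ trace                                   # synaptic input
    rate = r_max / (1.0 + np.exp(-h))               # sigmoidal rate function
    spikes = rng.random(n) < rate * dt              # Poisson spiking in this bin
    trace += dt / tau_syn * (-trace + spikes / dt)  # exponential PSP filter

print("mean filtered rates (Hz):", trace.round(1))
```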
Combining Visual and Acoustic Speech Signals with a Neural Network Improves Intelligibility
Sejnowski, Terrence J., Yuhas, Ben P., Goldstein, Moise H., Jr., Jenkins, Robert E.
Compensatory information is available from the visual speech signals around the speaker's mouth. Previous attempts at using these visual speech signals to improve automatic speech recognition systems have combined the acoustic and visual speech information at a symbolic level using heuristic rules. In this paper, we demonstrate an alternative approach to fusing the visual and acoustic speech information by training feedforward neural networks to map the visual signal onto the corresponding short-term spectral amplitude envelope (STSAE) of the acoustic signal. This information can be directly combined with the degraded acoustic STSAE. Significant improvements are demonstrated in vowel recognition from noise-degraded acoustic signals. These results are compared to the performance of humans, as well as other pattern matching and estimation algorithms.
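To make the fusion scheme concrete, here is a small synthetic sketch: a one-hidden-layer network trained by backpropagation to map a "visual" feature vector to a spectral envelope, whose output is then averaged with a noise-degraded acoustic envelope. The data, the layer sizes, and the simple averaging rule are all assumptions for illustration, not the paper's corpus or network.

```python
import numpy as np

# Minimal sketch of the fusion idea on synthetic data (hypothetical sizes):
# map visual features -> estimated STSAE, then combine with the noisy
# acoustic STSAE by averaging.

rng = np.random.default_rng(2)
n_vis, n_hid, n_spec, n_samples = 25, 10, 8, 500   # assumed dimensions

X = rng.random((n_samples, n_vis))                 # synthetic "visual" inputs
A = 0.3 * rng.standard_normal((n_vis, n_spec))
Y = np.tanh(X @ A)                                 # synthetic clean STSAE targets

W1 = 0.1 * rng.standard_normal((n_vis, n_hid))
W2 = 0.1 * rng.standard_normal((n_hid, n_spec))

for _ in range(3000):                              # plain backpropagation
    H = np.tanh(X @ W1)
    Y_hat = H @ W2
    err = Y_hat - Y
    W2 -= 0.02 * H.T @ err / n_samples
    W1 -= 0.02 * X.T @ ((err @ W2.T) * (1 - H ** 2)) / n_samples

# Fusion: combine the visual estimate with a noise-degraded acoustic envelope.
noisy_acoustic = Y + 0.5 * rng.standard_normal(Y.shape)
visual_estimate = np.tanh(X @ W1) @ W2
fused = 0.5 * (noisy_acoustic + visual_estimate)
print("MSE acoustic only:", np.mean((noisy_acoustic - Y) ** 2).round(3))
print("MSE fused        :", np.mean((fused - Y) ** 2).round(3))
```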
Neural Networks: The Early Days
A short account is given of various investigations of neural network properties, beginning with the classic work of McCulloch & Pitts. Early work on neurodynamics and statistical mechanics, analogies with magnetic materials, fault tolerance via parallel distributed processing, memory, learning, and pattern recognition is described.
A Systematic Study of the Input/Output Properties of a 2 Compartment Model Neuron With Active Membranes
The input/output properties of a 2 compartment model neuron are systematically explored. Taken from the work of MacGregor (MacGregor, 1987), the model neuron compartments contain several active conductances, including a potassium conductance in the dendritic compartment driven by the accumulation of intradendritic calcium. Dynamics of the conductances and potentials are governed by a set of coupled first-order differential equations which are integrated numerically. The model has 17 internal parameters, specifying conductance rate constants, time constants, thresholds, etc. To study parameter sensitivity, a set of trials was run in which the input driving the neuron is kept fixed while each internal parameter is varied with all others left fixed. To study the input/output relation, the input to the dendrite (a square wave) was varied in frequency and magnitude while all internal parameters of the system were left fixed, and the resulting output firing rate and bursting rate were measured. The input/output relation of the model neuron studied turns out to be much more sensitive to modulation of certain dendritic potassium current parameters than to plasticity of synaptic efficacy per se (the amount of current influx due to synapse activation). This in turn suggests, as has recently been observed experimentally, that the potassium current may be as important a focus of neural plasticity as synaptic efficacy, or more so.
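The one-parameter-at-a-time sweep can be illustrated with a toy two-compartment neuron; this is not MacGregor's 17-parameter model, and the parameter names, values, and threshold-and-reset spiking rule below are assumptions chosen only to show the procedure.

```python
import numpy as np

# Toy two-compartment neuron (soma + dendrite) with leak and coupling
# conductances, integrated with forward Euler. The soma spikes via a simple
# threshold-and-reset rule. Each parameter is varied in turn while the
# square-wave dendritic input and all other parameters are held fixed.

def firing_rate(params, dt=0.1, t_max=1000.0):
    g_leak, g_couple, threshold, i_amp = (params[k] for k in
                                          ("g_leak", "g_couple", "threshold", "i_amp"))
    v_s = v_d = 0.0
    spikes = 0
    for k in range(int(t_max / dt)):
        i_in = i_amp if (k * dt) % 100.0 < 50.0 else 0.0   # square-wave input
        dv_d = -g_leak * v_d + g_couple * (v_s - v_d) + i_in
        dv_s = -g_leak * v_s + g_couple * (v_d - v_s)
        v_d += dt * dv_d
        v_s += dt * dv_s
        if v_s > threshold:                                # threshold-and-reset spike
            spikes += 1
            v_s = 0.0
    return 1000.0 * spikes / t_max                         # spikes/s (time in ms)

base = {"g_leak": 0.1, "g_couple": 0.2, "threshold": 1.0, "i_amp": 0.5}
print("baseline rate:", firing_rate(base))
for name in base:                                          # one-at-a-time sensitivity
    for scale in (0.5, 2.0):
        trial = dict(base, **{name: base[name] * scale})
        print(f"{name} x{scale}: rate = {firing_rate(trial):.1f} Hz")
```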
A Method for the Associative Storage of Analog Vectors
Atiya, Amir F., Abu-Mostafa, Yaser S.
A method for storing analog vectors in Hopfield's continuous feedback model is proposed. By analog vectors we mean vectors whose components are real-valued. The vectors to be stored are set as equilibria of the network. The network model consists of one layer of visible neurons and one layer of hidden neurons. We propose a learning algorithm that adjusts the positions of the equilibria and guarantees their stability.
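The idea of making an arbitrary analog vector a stable equilibrium of a continuous feedback network can be illustrated directly, although the construction below is not the learning algorithm proposed in the paper; the bias trick and the small random weights are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: in a continuous feedback network
#   dx/dt = -x + W*tanh(x) + b,
# an analog vector x_star becomes an equilibrium if b = x_star - W*tanh(x_star).
# It is stable when the Jacobian at x_star has eigenvalues with negative real
# parts, in which case nearby states relax back to x_star.

rng = np.random.default_rng(3)
n = 5
x_star = rng.uniform(-1, 1, n)              # analog vector to store
W = 0.3 * rng.standard_normal((n, n))       # small weights keep x_star stable
b = x_star - W @ np.tanh(x_star)            # makes x_star an equilibrium

# Stability check: J = -I + W * diag(sech^2(x_star)).
J = -np.eye(n) + W * (1.0 / np.cosh(x_star) ** 2)
print("max Re(eigenvalue):", np.linalg.eigvals(J).real.max())  # < 0 => stable

# Relax the network from a perturbed state and verify it recovers x_star.
x = x_star + 0.2 * rng.standard_normal(n)
dt = 0.01
for _ in range(5000):
    x += dt * (-x + W @ np.tanh(x) + b)
print("recovery error:", np.linalg.norm(x - x_star))
```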
Incremental Parsing by Modular Recurrent Connectionist Networks
We present a novel, modular, recurrent connectionist network architecture which learns to robustly perform incremental parsing of complex sentences. From sequential input, one word at a time, our networks learn to do semantic role assignment, noun phrase attachment, and clause structure recognition for sentences with passive constructions and center embedded clauses. The networks make syntactic and semantic predictions at every point in time, and previous predictions are revised as expectations are affirmed or violated with the arrival of new information. Our networks induce their own "grammar rules" for dynamically transforming an input sequence of words into a syntactic/semantic interpretation. These networks generalize and display tolerance to input which has been corrupted in ways common in spoken language.
On the Distribution of the Number of Local Minima of a Random Function on a Graph
Baldi, Pierre, Rinott, Yosef, Stein, Charles
Minimization of energy or error functions has proved to be a useful principle in the design and analysis of neural networks and neural algorithms. A brief list of examples includes the back-propagation algorithm, the use of optimization methods in computational vision, the application of analog networks to the approximate solution of NP-complete problems, and the Hopfield model of associative memory.
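The quantity whose distribution is studied can be illustrated empirically: assign i.i.d. random values to the vertices of a graph and count the vertices that are smaller than all of their neighbors. The sketch below does this on a hypercube, a standard example graph; the dimension and number of trials are assumptions.

```python
import numpy as np
from itertools import product

# Monte Carlo sketch: random values on the d-dimensional hypercube, counting
# local minima (vertices smaller than all their neighbours) per trial.

def count_local_minima(values, d):
    """values: dict mapping each d-bit vertex tuple to its random value."""
    count = 0
    for v, f in values.items():
        neighbours = [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(d)]
        if all(f < values[u] for u in neighbours):
            count += 1
    return count

d, trials = 6, 2000
rng = np.random.default_rng(4)
vertices = list(product((0, 1), repeat=d))
counts = []
for _ in range(trials):
    values = dict(zip(vertices, rng.random(len(vertices))))
    counts.append(count_local_minima(values, d))

# For i.i.d. values, each vertex is a local minimum with probability 1/(d+1),
# so the expected count is 2**d / (d+1).
print("empirical mean:", np.mean(counts), " expected:", 2 ** d / (d + 1))
```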
Neural Network Weight Matrix Synthesis Using Optimal Control Techniques
Farotimi, O., Dembo, Amir, Kailath, Thomas
Given a set of input-output training samples, we describe a procedure for determining the time sequence of weights for a dynamic neural network to model an arbitrary input-output process. We formulate the input-output mapping problem as an optimal control problem, defining a performance index to be minimized as a function of the time-varying weights. We solve the resulting nonlinear two-point boundary-value problem, and this yields the training rule. For the performance index chosen, this rule turns out to be a continuous-time generalization of the outer-product rule suggested earlier, on heuristic grounds, by Hopfield for designing associative memories. Learning curves for the new technique are presented.
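For reference, the special case mentioned in the abstract, Hopfield's outer-product rule, can be written down directly. The sketch below shows that static rule on synthetic binary patterns; it is not the optimal-control synthesis itself, and the pattern sizes are arbitrary.

```python
import numpy as np

# Hopfield's outer-product rule for an associative memory with +/-1 patterns.
# The paper's optimal-control procedure yields a continuous-time, time-varying
# generalization of this weight construction.

rng = np.random.default_rng(5)
n, p = 50, 5
patterns = rng.choice([-1, 1], size=(p, n))        # hypothetical stored patterns

W = (patterns.T @ patterns) / n                    # sum of outer products
np.fill_diagonal(W, 0.0)

# Recall: start from a corrupted pattern and iterate the sign dynamics.
probe = patterns[0].copy()
probe[:5] *= -1                                    # flip a few bits
for _ in range(10):
    probe = np.sign(W @ probe)
print("bits matching stored pattern:", int(np.sum(probe == patterns[0])), "/", n)
```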