Adjoint Operator Algorithms for Faster Learning in Dynamical Neural Networks
Barhen, Jacob, Toomarian, Nikzad Benny, Gulati, Sandeep
A methodology for faster supervised learning in dynamical nonlinear neural networks is presented. It exploits the concept of adjoint operators to enable the computation of changes in the network's response due to perturbations in all system parameters, using the solution of a single set of appropriately constructed linear equations. The lower bound on the speedup per learning iteration over conventional methods for calculating the neuromorphic energy gradient is O(N²), where N is the number of neurons in the network.

1 INTRODUCTION

The biggest promise of artificial neural networks as computational tools lies in the hope that they will enable fast processing and synthesis of complex information patterns. In particular, considerable effort has recently been devoted to the formulation of efficient methodologies for learning (e.g., Rumelhart et al., 1986; Pineda, 1988; Pearlmutter, 1989; Williams and Zipser, 1989; Barhen, Gulati and Zak, 1989). The development of learning algorithms is generally based upon the minimization of a neuromorphic energy function. The fundamental requirement of such an approach is the computation of the gradient of this objective function with respect to the various parameters of the neural architecture, e.g., synaptic weights, neural …
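To make the adjoint idea concrete, here is a minimal NumPy sketch for a toy fixed-point network u = tanh(Wu + inp) with energy E = ½‖u − target‖². Differentiating the fixed-point condition shows that one transposed (adjoint) linear solve yields the gradient with respect to every weight at once. The network equations, energy, and solver below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def fixed_point(W, inp, n_iters=500):
    """Relax u = tanh(W u + inp) by simple iteration (assumes convergence)."""
    u = np.zeros(len(inp))
    for _ in range(n_iters):
        u = np.tanh(W @ u + inp)
    return u

def adjoint_gradient(W, inp, target):
    """Gradient of E = 0.5*||u - target||^2 w.r.t. every W_ij from a
    single adjoint (transposed) linear solve, instead of one forward
    sensitivity solve per parameter."""
    u = fixed_point(W, inp)
    D = 1.0 - u**2                        # tanh' at the operating point
    J = -np.eye(len(u)) + D[:, None] * W  # Jacobian dF/du of F(u) = -u + tanh(Wu + inp)
    lam = np.linalg.solve(J.T, u - target)  # one adjoint system: J^T lam = dE/du
    # dF/dW_ij has a single nonzero entry, (1 - u_i^2) * u_j in row i,
    # so the full gradient collapses to a rank-one outer product:
    return -np.outer(lam * D, u)

# quick finite-difference check of a single entry
rng = np.random.default_rng(0)
W = 0.3 * rng.standard_normal((5, 5))
inp = rng.standard_normal(5)
target = rng.standard_normal(5)
g = adjoint_gradient(W, inp, target)
eps = 1e-6
Wp = W.copy(); Wp[1, 2] += eps
E = lambda W_: 0.5 * np.sum((fixed_point(W_, inp) - target) ** 2)
print(g[1, 2], (E(Wp) - E(W)) / eps)  # the two numbers should agree
```

The point of the speedup bound is visible here: the direct approach would solve one linear system per weight (N² of them), while the adjoint approach solves a single transposed system and reads off all N² components.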
Effects of Firing Synchrony on Signal Propagation in Layered Networks
Kenyon, G. T., Fetz, Eberhard E., Puff, R. D.
Spiking neurons which integrate to threshold and fire were used to study the transmission of frequency-modulated (FM) signals through layered networks. Firing correlations between cells in the input layer were found to modulate the transmission of FM signals under certain dynamical conditions. A tonic level of activity was maintained by providing each cell with a source of Poisson-distributed synaptic input. When the average membrane depolarization produced by the synaptic input was sufficiently below threshold, the firing correlations between cells in the input layer could greatly amplify the signal present in subsequent layers. When the depolarization was sufficiently close to threshold, however, the firing synchrony between cells in the initial layers could no longer affect the propagation of FM signals. In this latter case, integrate-and-fire neurons could be effectively modeled by simpler analog elements governed by a linear input-output relation.
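A minimal sketch of one such cell: a leaky integrator driven by Poisson-distributed synaptic events, with a reset after each threshold crossing. The parameter values and reset rule are assumptions for illustration, not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_poisson(rate_hz, w_syn, v_th=1.0, tau_m=0.02, dt=1e-4, t_end=1.0):
    """Leaky integrate-and-fire cell driven by Poisson synaptic input.
    Returns spike times. All parameter values are illustrative."""
    v, spikes = 0.0, []
    for step in range(int(t_end / dt)):
        n_in = rng.poisson(rate_hz * dt)     # Poisson-distributed input events
        v += -v * dt / tau_m + w_syn * n_in  # leak plus synaptic increments
        if v >= v_th:                        # threshold crossing -> spike
            spikes.append(step * dt)
            v = 0.0                          # reset after firing
    return spikes

# Mean depolarization ~ w_syn * rate_hz * tau_m; compare the two regimes:
print(len(lif_poisson(rate_hz=800.0, w_syn=0.01)))  # mean v ~ 0.16: fluctuation-driven
print(len(lif_poisson(rate_hz=800.0, w_syn=0.06)))  # mean v ~ 0.96: near threshold
```

In the first regime output spikes are driven by input fluctuations, which is where input correlations can amplify a signal; in the second the cell approaches the linear analog behavior described in the abstract.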
Non-Boltzmann Dynamics in Networks of Spiking Neurons
Crair, Michael C., Bialek, William
We study networks of spiking neurons in which spikes are fired as a Poisson process. The state of a cell is determined by the instantaneous firing rate, and in the limit of high firing rates our model reduces to that studied by Hopfield. We find that the inclusion of spiking results in several new features, such as a noise-induced asymmetry between "on" and "off" states of the cells, and probability currents which destroy the usual description of network dynamics in terms of energy surfaces. Taking account of spikes also allows us to calibrate network parameters such as "synaptic weights" against experiments on real synapses. Realistic forms of the postsynaptic response alter the network dynamics, which suggests a novel dynamical learning mechanism.
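The model class can be sketched as follows: each cell emits spikes as a Poisson process whose instantaneous rate is a sigmoid of its internal state, and the smooth rate dynamics are recovered as the maximum rate grows. The functional forms and time constants below are assumptions, not the paper's equations.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(W, r_max=100.0, dt=1e-3, steps=2000, tau=0.05):
    """Each cell fires as a Poisson process with rate r_max * sigmoid(x);
    the synaptic drive is delivered by the actual spike counts, so spiking
    noise enters the state dynamics directly."""
    x = rng.standard_normal(W.shape[0])    # internal state of each cell
    for _ in range(steps):
        rate = r_max / (1.0 + np.exp(-x))  # instantaneous firing rate
        spikes = rng.poisson(rate * dt)    # Poisson spike counts this step
        # State relaxes toward the spike-delivered drive; dividing by
        # r_max*dt makes W @ spikes -> W @ sigmoid(x) as r_max -> infinity,
        # recovering the smooth (Hopfield-style) rate model.
        x += dt / tau * (-x + (W @ spikes) / (r_max * dt))
    return 1.0 / (1.0 + np.exp(-x))        # mean "on" probability of each cell

# e.g. a small symmetric ("Hopfield-like") weight matrix
n = 8
J = rng.standard_normal((n, n)); W = (J + J.T) / 2; np.fill_diagonal(W, 0.0)
print(simulate(W, r_max=10.0))    # strong spiking noise
print(simulate(W, r_max=1000.0))  # approaches the smooth rate dynamics
```

At finite r_max the fluctuations scale like 1/sqrt(r_max·dt), which is the kind of spiking noise that produces the on/off asymmetry and probability currents the abstract describes.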
Performance Comparisons Between Backpropagation Networks and Classification Trees on Three Real-World Applications
Atlas, Les E., Cole, Ronald A., Connor, Jerome T., El-Sharkawi, Mohamed A., II, Robert J. Marks, Muthusamy, Yeshwant K., Barnard, Etienne
In this paper we compare regression and classification systems. A regression system can generate an output f for an input X, where both X and f are continuous and, perhaps, multidimensional. A classification system can generate an output class, C, for an input X, where X is continuous and multidimensional and C is a member of a finite alphabet. The statistical technique of Classification And Regression Trees (CART) was developed during the years 1973 (Meisel and Michalopoulos) through 1984 (Breiman et al.).
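As a present-day stand-in for the two system types, the sketch below fits a CART-style decision tree and a small backpropagation network to the same synthetic task using scikit-learn. It illustrates the input/output conventions described above (continuous X, class label C from a finite alphabet), not the paper's experiments.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Continuous multidimensional input X, class label C from a finite alphabet.
X, C = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, C_tr, C_te = train_test_split(X, C, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, C_tr)  # CART-style tree
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, C_tr)            # backprop network
print("tree:", tree.score(X_te, C_te), "net:", net.score(X_te, C_te))
```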
Performance of Connectionist Learning Algorithms on 2-D SIMD Processor Arrays
Nuñez, Fernando J., Fortes, José A. B.
The mapping of the back-propagation and mean field theory learning algorithms onto a generic 2-D SIMD computer is described. This architecture proves well suited to these applications, since efficiencies close to the optimum can be attained. Expressions to find the learning rates are given and then particularized to the DAP array processor.
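The essence of the mapping can be sketched in NumPy: store one weight per processing element on an N × N grid, so the forward pass is a column broadcast, one local multiply per PE, and a row reduction, and the backward pass reuses the same grid with rows and columns exchanged. This is a sketch of the data movement only, under assumed layout conventions; the DAP-specific mapping and timing expressions are not reproduced here.

```python
import numpy as np

def simd_forward(W, x):
    """Forward pass mapped onto an N x N grid of PEs, one weight per PE.
    Each PE multiplies its weight by the activation broadcast along its
    column; the row-wise sum is a reduction on real 2-D SIMD hardware."""
    products = W * x[np.newaxis, :]        # column broadcast: one multiply per PE
    return np.tanh(products.sum(axis=1))   # row reduction, then nonlinearity

def simd_backward(W, x, delta_out):
    """Backward pass reuses the same grid: deltas broadcast along rows,
    and the weight update is one local multiply-accumulate per PE."""
    delta_in = (W * delta_out[:, np.newaxis]).sum(axis=0)  # column reduction: W^T delta
    dW = np.outer(delta_out, x)            # each PE updates its own weight locally
    return delta_in, dW
```

On an actual grid the broadcasts and reductions are the only non-local steps, which is where the near-optimal efficiencies come from: every PE performs the same local multiply-accumulate in lockstep.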
Dynamic Behavior of Constrained Back-Propagation Networks
It is generally admitted that the generalization performance of back-propagation networks (Rumelhart, Hinton & Williams, 1986) will depend on the relative size of the training data and of the trained network. By analogy with curve-fitting, and on theoretical grounds, the generalization performance of the network should decrease as the size of the network and the associated number of degrees of freedom increase (Rumelhart, 1987; Denker et al., 1987; Hanson & Pratt, 1989). This paper examines the dynamics of the standard back-propagation algorithm (BP) and of a constrained back-propagation variation (CBP), designed to adapt the size of the network to the training data base. The performance, learning dynamics and the representations resulting from the two algorithms are compared.
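For concreteness, here is one gradient step of such a constrained variation, using a simple magnitude penalty (weight decay) as a stand-in for the constraint term, since the abstract does not specify CBP's exact formulation. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def cbp_step(W1, W2, x, t, lr=0.1, lam=1e-3):
    """One back-propagation step with an added magnitude penalty: degrees
    of freedom the error gradient does not support are pushed toward zero,
    shrinking the effective size of the network. The penalty form is an
    assumption, not the paper's constraint."""
    h = np.tanh(W1 @ x)               # hidden layer
    y = W2 @ h                        # linear output
    e = y - t                         # output error
    dW2 = np.outer(e, h) + lam * W2   # error gradient + penalty gradient
    dh = (W2.T @ e) * (1 - h**2)      # backpropagated delta
    dW1 = np.outer(dh, x) + lam * W1
    return W1 - lr * dW1, W2 - lr * dW2

x, t = rng.standard_normal(4), rng.standard_normal(2)
W1, W2 = 0.5 * rng.standard_normal((8, 4)), 0.5 * rng.standard_normal((2, 8))
for _ in range(100):
    W1, W2 = cbp_step(W1, W2, x, t)
print(np.abs(W1).mean())  # penalized weights shrink where the data do not need them
```

With lam = 0 this reduces to a standard BP step; a positive lam steadily prunes unneeded weights, which is the size-adaptation effect the comparison focuses on.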