Generalization of Backpropagation to Recurrent and Higher Order Neural Networks
Fernando J. Pineda
Applied Physics Laboratory, Johns Hopkins University
Johns Hopkins Rd., Laurel, MD 20707

Abstract

A general method for deriving backpropagation algorithms for recurrent and higher order networks is introduced. The propagation of activation in these networks is determined by dissipative differential equations. The error signal is backpropagated by integrating an associated differential equation. The method is introduced by applying it to the recurrent generalization of the feedforward backpropagation network. It is then extended to higher order networks and to a constrained dynamical system for training a content addressable memory. The essential feature of the adaptive algorithms is that the adaptive equation has a simple outer-product form.
Neural Information Processing Systems
Dec-31-1988
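The recipe summarized in the abstract, relaxing dissipative forward dynamics to a fixed point, integrating an associated error equation, and applying an outer-product weight update, can be sketched for the recurrent generalization of feedforward backpropagation. This is a minimal illustration under assumptions, not the paper's code: the sigmoid nonlinearity, the three-unit network, the Euler relaxation, and the learning rate are all choices made for the example.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def relax(f, x0, steps=300, dt=0.1):
    """Euler-integrate the dissipative system dx/dt = -x + f(x) to a fixed point."""
    x = x0
    for _ in range(steps):
        x = x + dt * (-x + f(x))
    return x

def rbp_step(W, I, target, out_mask, eta=0.5):
    """One learning step of recurrent backpropagation (fixed-point form)."""
    n = len(I)
    # Forward pass: relax dx/dt = -x + g(W x + I) to its stable fixed point.
    x = relax(lambda v: sigmoid(W @ v + I), np.zeros(n))
    u = W @ x + I
    gp = sigmoid(u) * (1.0 - sigmoid(u))   # g'(u)
    J = out_mask * (target - x)            # error injected at the output units
    # Error pass: relax the associated linear equation
    # dy/dt = -y + W^T (g'(u) * y) + J to its fixed point.
    y = relax(lambda v: W.T @ (gp * v) + J, np.zeros(n))
    # Adaptive equation: simple outer-product form, dW proportional to (g'(u)*y) x^T.
    W = W + eta * np.outer(gp * y, x)
    return W, 0.5 * float(J @ J)

# Tiny demo: 3 fully recurrent units; unit 0 is driven by external input I,
# and unit 2 is trained (via out_mask) to settle at activation 0.8.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((3, 3))
I = np.array([1.0, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.8])
out_mask = np.array([0.0, 0.0, 1.0])
errors = []
for _ in range(200):
    W, err = rbp_step(W, I, target, out_mask)
    errors.append(err)
```

Note that both passes reuse the same relaxation routine: the error equation is linear in y, so it settles to a fixed point whenever the forward dynamics are stable, which is what makes the backward integration well defined.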