Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization
Mangasarian, O. L., Solodov, M. V.
Neural Information Processing Systems
The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method. Under certain natural assumptions, such as the series of learning rates diverging while the series of their squares converges, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error function. The results presented cover serial and parallel online BP, modified BP with a momentum term, and BP with weight decay.
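The learning-rate assumptions referenced above are the classical diminishing step-size conditions; in standard notation (the symbol \eta_t for the learning rate at iteration t is assumed here, not taken from the paper), they read:

\[ \sum_{t=1}^{\infty} \eta_t = \infty, \qquad \sum_{t=1}^{\infty} \eta_t^2 < \infty. \]

A canonical schedule satisfying both conditions is \eta_t = c/t for a constant c > 0, since the harmonic series diverges while the series of its squares converges.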
Dec-31-1994