Adaptive Back-Propagation in On-Line Learning of Multilayer Networks
An adaptive back-propagation algorithm is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, both numerical studies and a rigorous analysis show that adaptive back-propagation trains faster than gradient descent: it breaks the permutation symmetry between hidden units more efficiently and converges more quickly to optimal generalization.
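To make the comparison concrete, the following is a minimal sketch of one on-line update for a student two-layer (soft-committee-style) network learning from a teacher, contrasting a standard gradient-descent step with an adaptive variant. The network sizes, the tanh activation (standing in for the paper's activation function), and the specific adaptive form used here, rescaling the argument of the hidden-unit derivative by a gain `beta`, are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 20, 3          # input dimension and number of hidden units (illustrative)
eta, beta = 0.1, 2.0  # learning rate; beta > 1 is the assumed adaptive gain

def g(x):
    # tanh stands in for the paper's hidden-unit activation (an assumption)
    return np.tanh(x)

def dg(x):
    # derivative of tanh
    return 1.0 - np.tanh(x) ** 2

B = rng.standard_normal((K, N))  # fixed "teacher" weights
J = rng.standard_normal((K, N))  # "student" weights, random initialization

def online_step(J, xi, adaptive=False):
    """One on-line update of the student on a single example xi.

    Standard back-propagation weights the hidden-unit error by dg(x_i);
    the adaptive variant sketched here rescales the derivative's argument
    by beta (an assumed form), sharpening the hidden-unit error signals
    and helping break the permutation symmetry between hidden units.
    """
    x = J @ xi / np.sqrt(N)          # student hidden-unit fields
    y = B @ xi / np.sqrt(N)          # teacher hidden-unit fields
    error = g(y).sum() - g(x).sum()  # teacher output minus student output
    deriv = dg(beta * x) if adaptive else dg(x)
    # gradient-descent-style update, one hidden unit per row of J
    return J + (eta / np.sqrt(N)) * error * deriv[:, None] * xi[None, :]

xi = rng.standard_normal(N)
J_gd = online_step(J, xi, adaptive=False)  # standard back-propagation step
J_ad = online_step(J, xi, adaptive=True)   # adaptive step
```

Both steps follow the same error signal; only the hidden-unit derivative factor differs, which is where the symmetry-breaking advantage of the adaptive rule enters.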