The Efficiency and the Robustness of Natural Gradient Descent Learning Rule

Howard Hua Yang, Shun-ichi Amari

Neural Information Processing Systems 

The inverse of the Fisher information matrix is used in the natural gradient descent algorithm to train single-layer and multi-layer perceptrons. We have discovered a new scheme to represent the Fisher information matrix of a stochastic multi-layer perceptron. Based on this scheme, we have designed an algorithm to compute the natural gradient. When the input dimension n is much larger than the number of hidden neurons, the complexity of this algorithm is O(n). Simulations confirm that the natural gradient descent learning rule is not only efficient but also robust.
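To illustrate the idea behind the learning rule, the sketch below applies a natural gradient step w ← w − η F⁻¹∇L, preconditioning the ordinary gradient by the inverse Fisher information matrix. This is a minimal illustration for a single-layer linear perceptron with Gaussian output noise (for which F = E[xxᵀ]/σ²), not the paper's multi-layer algorithm or its O(n) representation scheme; all variable names are assumptions for this example.

```python
import numpy as np

# Illustrative setup: y = w_true . x + Gaussian noise, sigma known.
rng = np.random.default_rng(0)
n = 5                                  # input dimension
w_true = rng.normal(size=n)            # target weights
sigma = 0.1

X = rng.normal(size=(1000, n))
y = X @ w_true + sigma * rng.normal(size=1000)

# Empirical Fisher information for this Gaussian model: F = E[x x^T] / sigma^2.
F = (X.T @ X) / len(X) / sigma**2
F_inv = np.linalg.inv(F)

w = np.zeros(n)
eta = 0.5                              # learning rate
for _ in range(50):
    # Gradient of the mean negative log-likelihood w.r.t. w.
    grad = -(X.T @ (y - X @ w)) / len(X) / sigma**2
    # Natural gradient step: precondition the gradient by F^{-1}.
    w = w - eta * F_inv @ grad

print(np.max(np.abs(w - w_true)))      # close to zero after convergence
```

For this quadratic likelihood the preconditioned step converges geometrically to the maximum-likelihood weights, whereas the ordinary gradient step's speed depends on the conditioning of E[xxᵀ]; this invariance to the input statistics is the efficiency property the abstract refers to.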
