Adaptive multiple optimal learning factors for neural network training

Challagundla, Jeshwanth

arXiv.org Artificial Intelligence 

The University of Texas at Arlington, 2015. Supervising Professor: Michael Manry.

There is always ambiguity in deciding how many learning factors are actually required for training a multilayer perceptron. This thesis addresses that problem by introducing a method that adaptively changes the number of learning factors computed, based on the error change produced per multiply. A new method is introduced for computing learning factors for groups of weights, where the grouping is based on the curvature of the objective function. A method is also presented for linearly compressing large, ill-conditioned Newton's Hessian matrices into smaller, well-conditioned ones. The thesis further shows that the proposed training algorithm adapts itself between two other algorithms in order to produce a greater error decrease per multiply. The performance of the proposed algorithm is shown to be better than that of OWO-MOLF and Levenberg-Marquardt for most of the data sets.
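The abstract describes computing one learning factor per group of weights and linearly compressing a large Hessian into a smaller one. A minimal sketch of how such a grouped Newton step could work is given below; the function name, the 0/1 grouping matrix, and the small ridge term are illustrative assumptions, not the thesis's actual algorithm:

```python
import numpy as np

def grouped_newton_step(H, g, groups):
    """Compress an n x n Hessian H and gradient g down to k groups,
    solve the small k x k system for one learning factor per group,
    and broadcast each factor back to the weights in its group.
    `groups[i]` is the group id (0..k-1) of weight i."""
    n = len(g)
    k = int(max(groups)) + 1
    # Grouping (compression) matrix C: n x k with C[i, groups[i]] = 1
    C = np.zeros((n, k))
    C[np.arange(n), groups] = 1.0
    Hk = C.T @ H @ C                # compressed k x k Hessian
    gk = C.T @ g                    # compressed k-dim gradient
    # Small ridge term guards against residual ill-conditioning
    z = np.linalg.solve(Hk + 1e-8 * np.eye(k), gk)
    return C @ z                    # per-weight step, constant within a group
```

With k much smaller than n, the compressed system is far cheaper to factor than the full Newton system, which is consistent with the abstract's emphasis on error decrease per multiply.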
