Adaptive multiple optimal learning factors for neural network training
–arXiv.org Artificial Intelligence
The University of Texas at Arlington, 2015. Supervising Professor: Michael Manry. There is always ambiguity in deciding the number of learning factors actually required for training a Multi-Layer Perceptron. This thesis addresses the problem by introducing a new method that adaptively changes the number of learning factors computed, based on the error change produced per multiply. A new method is introduced for computing learning factors for weights grouped according to the curvature of the objective function, and a method is shown for linearly compressing large, ill-conditioned Newton Hessian matrices into smaller, well-conditioned ones. The thesis also shows that the proposed training algorithm adapts itself between two other algorithms in order to produce a better error decrease per multiply. The performance of the proposed algorithm is shown to be better than OWO-MOLF and Levenberg-Marquardt on most of the data sets.
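The Hessian-compression idea described above can be illustrated with a minimal sketch: assign each weight to a group that shares a single learning factor, encode the assignment in a grouping matrix, and project the large Newton system onto the groups. The grouping rule and matrix sizes below are assumptions for the demo, not the thesis's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_weights, n_groups = 12, 3

# Symmetric Hessian with eigenvalues spread over ten orders of magnitude,
# so the full Newton system is numerically ill-conditioned.
Q, _ = np.linalg.qr(rng.standard_normal((n_weights, n_weights)))
H = Q @ np.diag(np.logspace(-8, 2, n_weights)) @ Q.T
g = rng.standard_normal(n_weights)  # gradient vector

# Hypothetical grouping: weight i shares a learning factor with group i % n_groups.
C = np.zeros((n_weights, n_groups))
C[np.arange(n_weights), np.arange(n_weights) % n_groups] = 1.0

# Compressed Newton system: one unknown learning factor per group.
H_small = C.T @ H @ C
g_small = C.T @ g
z = np.linalg.solve(H_small, g_small)  # per-group learning factors

print("cond(H)       =", np.linalg.cond(H))
print("cond(H_small) =", np.linalg.cond(H_small))
```

Because the columns of `C` here are orthogonal with equal norm, the compressed matrix's condition number is bounded by that of the original Hessian, which is the intuition behind solving the small grouped system instead of the full one.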
Jun-4-2024