Periodic Step Size Adaptation for Single Pass On-line Learning
Hsu, Chun-nan; Chang, Yu-ming; Huang, Hanshen; Lee, Yuh-jye
It has been established that the second-order stochastic gradient descent (2SGD) method can potentially achieve generalization performance as good as the empirical optimum in a single pass (i.e., one epoch) through the training examples. However, 2SGD requires computing the inverse of the Hessian matrix of the loss function, which is prohibitively expensive. This paper presents Periodic Step-size Adaptation (PSA), which approximates the Jacobian matrix of the update mapping and exploits a linear relation between the Jacobian and the Hessian to approximate the Hessian periodically. In experiments on a wide variety of models and tasks, PSA achieves near-optimal results.
Neural Information Processing Systems
Dec-31-2009
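
The abstract only sketches the mechanism, so the following is a rough NumPy illustration of the underlying idea rather than the paper's exact algorithm: for the SGD update map w → w − η∘g(w), the Jacobian is J = I − diag(η)H, so contraction ratios of successive parameter displacements estimate J, and the linear relation H ≈ (I − J)/η recovers a Hessian estimate that can be used to adapt per-coordinate step sizes once per period. The function name psa_sgd, the diagonal approximation, the period and clamping parameters, and the divergence safeguard are all illustrative assumptions, not the authors' update rules.

```python
import numpy as np

def psa_sgd(grad, w0, n_steps=3000, eta0=0.05, period=50,
            clamp=2.0, eps=1e-12):
    """Single-pass SGD with periodic, per-coordinate step-size adaptation.

    Diagonal sketch of the idea (hypothetical, not the paper's rules):
    on a locally quadratic loss the update map w -> w - eta*g(w) has
    Jacobian J = I - diag(eta)*H, so the per-coordinate contraction of
    net parameter displacements across consecutive periods estimates
    J_ii**period. The linear relation H_ii = (1 - J_ii) / eta_i then
    yields a Hessian estimate, and eta_i is moved toward the
    Newton-like value 1 / H_ii, clamped for stability.
    """
    w = np.asarray(w0, dtype=float).copy()
    eta = np.full_like(w, eta0)
    anchor, prev_disp = w.copy(), None
    for t in range(n_steps):
        w = w - eta * grad(w, t)                  # per-coordinate SGD step
        if (t + 1) % period == 0:
            disp = w - anchor                     # net movement this period
            if prev_disp is not None:
                # |disp / prev_disp| ~ |J_ii|**period  =>  estimate J_ii.
                ratio = np.abs(disp) / (np.abs(prev_disp) + eps)
                j_diag = np.clip(ratio, 0.0, 1.0 - 1e-3) ** (1.0 / period)
                h_diag = (1.0 - j_diag) / eta     # H ~ (I - J) / eta
                # Move eta toward 1/H, changing at most clamp-x per period;
                # if displacements are not contracting, shrink eta instead.
                grow = np.clip(1.0 / h_diag, eta / clamp, eta * clamp)
                eta = np.where(ratio < 1.0, grow, eta / clamp)
            anchor, prev_disp = w.copy(), disp
    return w

if __name__ == "__main__":
    # Noisy gradients of a separable quadratic with curvatures 10, 1, 0.1.
    rng = np.random.default_rng(0)
    curv = np.array([10.0, 1.0, 0.1])
    grad = lambda w, t: curv * w + 1e-3 * rng.standard_normal(3)
    print(psa_sgd(grad, np.ones(3)))  # all coordinates should end near 0
```

In this toy run the ill-conditioned curvatures (spanning two orders of magnitude) are exactly the setting where a single global step size fails: the periodic per-coordinate adaptation lets the flat direction's step size grow while keeping the steep direction stable.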