A Pseudocode for K-BFGS/K-BFGS(L)
Algorithm 4 gives pseudocode for K-BFGS/K-BFGS(L), which is the version implemented in our experiments. In this section, we prove the convergence of Algorithm 5, a variant of K-BFGS(L). To accomplish this, we prove Lemmas 1-3, which, in addition to Assumptions AS.1-2, ensure that all of the assumptions of Theorem 2.8 in [41] are satisfied.

[Algorithm 6: SQN method for nonconvex stochastic optimization, from [41]; pseudocode not reproduced here.]

Hence, Theorem 2.8 of [41] applies to Algorithm 5, proving Theorem 2.

Thus, we propose the following heuristic based on Powell's damped-BFGS approach, namely Powell's damping on H. This damping is used in lines 2 and 3 of DD (Algorithm 3). Our double damping (Algorithm 3) is a two-step damping procedure, whose first step is Powell's damping on H.
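To make the damping step concrete, the following minimal NumPy sketch shows Powell-style damping applied to the inverse approximation H: the (s, y) curvature pair is modified before the (L-)BFGS update so that the curvature condition holds. The function name and the threshold `mu` are illustrative assumptions, and the paper's full DD (Algorithm 3) adds a second damping step that is not shown here.

```python
import numpy as np

def powell_damping_on_H(s, y, H, mu=0.2):
    """Powell-style damping on the inverse approximation H (sketch).

    Replaces s by a convex combination of s and H @ y so that the damped
    pair satisfies s_damped^T y >= mu * y^T H y, which keeps the
    subsequent (L-)BFGS update well defined.  The threshold `mu` and the
    exact form are assumptions for illustration, not the paper's exact
    Algorithm 3.
    """
    Hy = H @ y
    yHy = float(y @ Hy)
    sy = float(s @ y)
    if sy < mu * yHy:
        # Choose theta so that the damped pair meets the curvature threshold.
        theta = (1.0 - mu) * yHy / (yHy - sy)
        s = theta * s + (1.0 - theta) * Hy
    return s, y
```

With this choice of theta one can check that the damped pair satisfies s^T y = mu * y^T H y exactly at the boundary case, so the secant pair fed to the BFGS formula always has sufficient positive curvature.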
Practical Quasi-Newton Methods for Training Deep Neural Networks
Goldfarb, Donald, Ren, Yi, Bahamou, Achraf
We consider the development of practical stochastic quasi-Newton, and in particular Kronecker-factored block-diagonal BFGS and L-BFGS methods, for training deep neural networks (DNNs). In DNN training, the number of variables and components of the gradient $n$ is often of the order of tens of millions and the Hessian has $n^2$ elements. Consequently, computing and storing a full $n \times n$ BFGS approximation or storing a modest number of (step, change in gradient) vector pairs for use in an L-BFGS implementation is out of the question. In our proposed methods, we approximate the Hessian by a block-diagonal matrix and use the structure of the gradient and Hessian to further approximate these blocks, each of which corresponds to a layer, as the Kronecker product of two much smaller matrices. This is analogous to the approach in KFAC, which computes a Kronecker-factored block-diagonal approximation to the Fisher matrix in a stochastic natural gradient method. Because of the indefinite and highly variable nature of the Hessian in a DNN, we also propose a new damping approach to keep the upper as well as the lower bounds of the BFGS and L-BFGS approximations bounded. In tests on autoencoder feed-forward neural network models with either nine or thirteen layers applied to three datasets, our methods outperformed or performed comparably to KFAC and state-of-the-art first-order stochastic methods.
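The practical payoff of the Kronecker-factored block approximation is that a layer's preconditioned gradient never requires forming or inverting the full block. The sketch below uses illustrative shapes and names (A, B, G are assumptions, not the paper's specific factors) to check the standard identity $(A \otimes B)^{-1}\,\mathrm{vec}(G) = \mathrm{vec}(B^{-1} G A^{-1})$ for symmetric positive definite factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny layer: weight matrix W is m x n, so its gradient G is too.
m, n = 4, 3
G = rng.standard_normal((m, n))

# Hypothetical Kronecker factors of this layer's curvature block:
# A (n x n) and B (m x m), both symmetric positive definite.
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((m, m)); B = B @ B.T + m * np.eye(m)

# Naive preconditioning: invert the full (mn x mn) Kronecker product.
full = np.kron(A, B)                      # infeasible for real layer sizes
vec_G = G.reshape(-1, order="F")          # column-stacking vec
step_naive = np.linalg.solve(full, vec_G)

# Factored preconditioning: (A kron B)^{-1} vec(G) = vec(B^{-1} G A^{-1}).
step_factored = np.linalg.solve(B, G) @ np.linalg.inv(A)

assert np.allclose(step_naive, step_factored.reshape(-1, order="F"))
print("Kronecker-factored solve matches the full solve.")
```

For a layer with an $m \times n$ weight matrix, this reduces the per-step cost from working with an $mn \times mn$ matrix to working with one $n \times n$ and one $m \times m$ factor, which is what makes the block-diagonal quasi-Newton approximations tractable at DNN scale.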