Non-convex Finite-Sum Optimization Via SCSG Methods
Lei, Lihua, Ju, Cheng, Chen, Jianbo, Jordan, Michael I.
Neural Information Processing Systems
We develop a class of algorithms, as variants of the stochastically controlled stochastic gradient (SCSG) methods, for the smooth nonconvex finite-sum optimization problem. Assuming only the smoothness of each component, the complexity of SCSG to reach a stationary point with $\mathbb{E}\|\nabla f(x)\|^{2}\le \epsilon$ is $O(\min\{\epsilon^{-5/3}, \epsilon^{-1}n^{2/3}\})$, which strictly outperforms stochastic gradient descent. Moreover, SCSG is never worse than the state-of-the-art methods based on variance reduction, and it significantly outperforms them when the target accuracy is low. A similar acceleration is also achieved when the functions satisfy the Polyak-Lojasiewicz condition. Empirical experiments demonstrate that SCSG outperforms stochastic gradient methods on training multi-layer neural networks in terms of both training and validation loss.
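The abstract does not spell out the algorithm, but the SCSG epoch structure it refers to can be sketched as follows: at each epoch, compute a subsampled "reference" batch gradient, then take a geometrically distributed number of variance-reduced (SVRG-style) inner steps. This is only a minimal illustrative sketch; the function and parameter names (`scsg`, `grad_fi`, `eta`, `B`, `b`) and all default values are assumptions, not taken from the paper.

```python
import numpy as np

def scsg(grad_fi, n, x0, eta=0.01, B=64, b=1, num_epochs=50, rng=None):
    """Sketch of an SCSG-style loop (assumed interface, not the authors' code).

    grad_fi(x, i) -- gradient of the i-th component function at x
    n            -- number of components in the finite sum
    B, b         -- outer batch size and inner mini-batch size
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    for _ in range(num_epochs):
        # Anchor point and subsampled reference gradient over a batch of size B.
        anchor = x.copy()
        batch = rng.choice(n, size=B, replace=False)
        g_ref = np.mean([grad_fi(anchor, i) for i in batch], axis=0)
        # Inner-loop length is geometric, roughly B/b steps on average
        # (the "stochastic control" in SCSG).
        N = rng.geometric(p=b / (B + b))
        for _ in range(N):
            mini = rng.choice(n, size=b, replace=False)
            g_new = np.mean([grad_fi(x, i) for i in mini], axis=0)
            g_old = np.mean([grad_fi(anchor, i) for i in mini], axis=0)
            # SVRG-style corrected stochastic gradient step.
            x -= eta * (g_new - g_old + g_ref)
    return x
```

Compared with full SVRG, the reference gradient here is computed over a subsample of size B rather than all n components, which is what yields the $O(\min\{\epsilon^{-5/3}, \epsilon^{-1}n^{2/3}\})$ complexity trade-off quoted above.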
Dec-31-2017