Optimal Regularized Dual Averaging Methods for Stochastic Optimization
Xi Chen, Qihang Lin, Javier Pena
Neural Information Processing Systems
This paper considers a wide spectrum of regularized stochastic optimization problems where both the loss function and the regularizer can be non-smooth. We develop a novel algorithm based on the regularized dual averaging (RDA) method that can simultaneously achieve the optimal convergence rates for both convex and strongly convex loss. In particular, for strongly convex loss, it achieves the optimal rate of $O(\frac{1}{N} + \frac{1}{N^2})$ for $N$ iterations, which improves the best known rate $O(\frac{\log N}{N})$ of previous stochastic dual averaging algorithms. We further develop a multi-stage extension using the proposed algorithm as a subroutine, which achieves the uniformly-optimal rate $O(\frac{1}{N} + \exp\{-N\})$ for strongly convex loss.
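
To make the dual-averaging idea concrete, below is a minimal Python sketch of a plain l1-RDA update loop in the style of the classic RDA method the paper builds on; it is not the paper's ORDA algorithm or its multi-stage extension, and the problem setup (data, lam, gamma) and helper names are illustrative assumptions.

import numpy as np

def rda_l1(grad_fn, dim, N, lam=0.1, gamma=1.0):
    """Run N iterations of l1-RDA; grad_fn(x, t) returns a stochastic gradient."""
    x = np.zeros(dim)
    gbar = np.zeros(dim)           # running average of stochastic gradients
    for t in range(1, N + 1):
        g = grad_fn(x, t)
        gbar += (g - gbar) / t     # incremental mean: gbar_t = (1/t) * sum of g_1..g_t
        beta = gamma * np.sqrt(t)  # sqrt(t) prox-term weight, as in classic RDA
        # Closed-form minimizer of <gbar, x> + lam*||x||_1 + (beta/(2t))*||x||^2,
        # i.e. componentwise soft-thresholding followed by scaling:
        shrunk = np.sign(gbar) * np.maximum(np.abs(gbar) - lam, 0.0)
        x = -(t / beta) * shrunk   # coordinates with |gbar_i| <= lam stay exactly zero
    return x

# Illustrative usage: noisy gradients of a least-squares loss with a sparse target.
rng = np.random.default_rng(0)
x_true = np.zeros(50); x_true[:5] = 1.0
A = rng.normal(size=(1000, 50)); b = A @ x_true + 0.1 * rng.normal(size=1000)

def grad_fn(x, t):
    i = rng.integers(len(b))       # one sampled row gives an unbiased stochastic gradient
    return A[i] * (A[i] @ x - b[i])

x_hat = rda_l1(grad_fn, dim=50, N=20000, lam=0.05)
print("nonzeros:", np.count_nonzero(np.round(x_hat, 3)))

Because the argmin is taken against the averaged gradient plus the whole regularizer, the soft-thresholding is applied to gbar rather than to the iterate, which is what lets dual-averaging methods produce exactly sparse solutions at every step.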