Robust stochastic optimization with the proximal point method
Damek Davis, Dmitriy Drusvyatskiy
Stochastic convex optimization lies at the core of modern statistical and machine learning. Standard results in the subject bound the number of samples that an algorithm needs to generate a point with small function value in expectation. More nuanced high probability guarantees are rarer, and typically either rely on "light-tails" assumptions or exhibit worse sample complexity. To address this issue, we show that a wide class of stochastic algorithms for strongly convex problems can be augmented with high confidence bounds at an overhead cost that is only logarithmic in the confidence level and polylogarithmic in the condition number. We discuss consequences both for streaming and offline algorithms. The procedure we propose, called proxBoost, is elementary and combines two well-known ingredients: robust distance estimation and the proximal point method.
Aug-1-2019
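The abstract does not spell out proxBoost itself, but its robust distance estimation ingredient can be illustrated on a toy problem: run a base stochastic solver several times independently, then select the trial whose median distance to the other trials is smallest, which boosts the confidence of the estimate. The sketch below is an illustrative assumption, not the paper's algorithm; the names (`sgd`, `robust_select`, `prox_boost`), the step-size schedule, and the toy quadratic objective are all invented for this example.

```python
import numpy as np

def sgd(grad, x0, steps=200, lr0=0.5, rng=None):
    # Base stochastic algorithm: SGD with a 1/t decaying step size.
    # grad(x, rng) returns a noisy gradient estimate at x.
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for t in range(1, steps + 1):
        x -= (lr0 / t) * grad(x, rng)
    return x

def robust_select(points):
    # Robust distance estimation (median trick): return the candidate
    # whose median distance to the other candidates is smallest.
    pts = np.asarray(points)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return pts[np.argmin(np.median(dists, axis=1))]

def prox_boost(grad, x0, trials=9, rng=None):
    # Run independent trials of the base solver, then pick a robust
    # representative; each trial succeeds with constant probability,
    # so the median-style selection amplifies the confidence.
    rng = rng or np.random.default_rng(0)
    candidates = [sgd(grad, x0, rng=rng) for _ in range(trials)]
    return robust_select(candidates)

# Usage on a toy strongly convex objective f(x) = 0.5 * ||x - x_star||^2
# with additive Gaussian gradient noise (all values illustrative):
x_star = np.ones(5)
noisy_grad = lambda x, r: (x - x_star) + r.standard_normal(x.shape)
x_hat = prox_boost(noisy_grad, np.zeros(5))
```

The actual proxBoost of the paper additionally interleaves proximal point steps to control the condition number; the sketch above only demonstrates the confidence-boosting selection step.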