Asymptotic normality and optimality in nonsmooth stochastic approximation

Damek Davis, Dmitriy Drusvyatskiy, Liwei Jiang

arXiv.org Machine Learning 

Polyak and Juditsky [30] famously showed that the stochastic gradient method for minimizing smooth and strongly convex functions enjoys a central limit theorem: the error between the running average of the iterates and the minimizer, normalized by the square root of the iteration counter, converges to a normal random vector. Moreover, the covariance matrix of the limiting distribution is, in a precise sense, "optimal" among all estimation procedures. A long-standing open question is whether similar guarantees - asymptotic normality and optimality - exist for nonsmooth optimization and, more generally, for equilibrium problems. In this work, we obtain such guarantees under mild conditions that hold both in concrete circumstances (e.g. [...]
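For concreteness, the Polyak-Juditsky result summarized above can be stated as follows; the symbols below (the minimizer $x^\star$, the averaged iterate $\bar{x}_k$, the stochastic gradient $\nabla f(\cdot;\xi)$, and the limiting covariance $\Sigma^\star$) are our own shorthand for the standard smooth, strongly convex setting of [30], not notation taken from the abstract:

\[
\bar{x}_k = \frac{1}{k}\sum_{i=1}^{k} x_i,
\qquad
\sqrt{k}\,\bigl(\bar{x}_k - x^\star\bigr) \;\xrightarrow{d}\; \mathcal{N}\!\bigl(0,\, \Sigma^\star\bigr),
\qquad
\Sigma^\star = \nabla^2 f(x^\star)^{-1}\,\mathrm{Cov}\!\bigl(\nabla f(x^\star;\xi)\bigr)\,\nabla^2 f(x^\star)^{-1}.
\]

Here the "optimality" claim is that $\Sigma^\star$ matches the local minimax lower bound for estimating $x^\star$ from the same stream of stochastic gradients, so no other estimation procedure can achieve a smaller asymptotic covariance.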
