High Probability Convergence of Stochastic Gradient Methods
Liu, Zijian, Nguyen, Ta Duy, Nguyen, Thien Hang, Ene, Alina, Nguyen, Huy Lê
Stochastic optimization is a fundamental area with extensive applications in many domains, ranging from machine learning to algorithm design and beyond. The design and analysis of iterative methods for stochastic optimization have been the focus of a long line of work, leading to a rich understanding of the convergence of paradigmatic iterative methods such as stochastic gradient descent, mirror descent, and accelerated methods for both convex and non-convex optimization. However, most of these works establish convergence guarantees that hold only in expectation. Although very meaningful, such results do not fully capture the convergence behavior of an algorithm when we perform only a small number of runs, as is typical in modern machine learning applications where performing multiple runs carries significant computational and statistical costs (Harvey et al., 2019; Madden et al., 2020; Davis et al., 2021). Thus, an important direction is to establish convergence guarantees for a single run of the algorithm that hold not only in expectation but also with high probability. Compared to guarantees in expectation, high probability guarantees are significantly harder to obtain, and they hold in more restricted settings, under stronger assumptions on the problem and the stochastic noise distribution. Most existing works that establish high probability guarantees focus on the setting where the length of the stochastic noise follows a light-tailed (sub-Gaussian) distribution (Juditsky et al., 2011; Lan, 2012, 2020; Li and Orabona, 2020; Madden et al., 2020; Kavis et al., 2021). Recent works also study the more challenging heavy-tailed setting, notably under a bounded-variance (Nazin et al., 2019; Gorbunov et al., 2020; Cutkosky and Mehta, 2021) or bounded p-th moment assumption (Cutkosky and Mehta, 2021) on the length of the stochastic noise. Both settings are highly relevant in practice: Zhang et al. (2020) empirically studied the noise distribution for two common tasks, training a
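The gap between in-expectation and high-probability guarantees can be made concrete with a small numerical sketch (not from the paper; the objective, step size, and noise models below are illustrative assumptions): running SGD on a simple quadratic under sub-Gaussian versus heavy-tailed gradient noise and comparing the mean final suboptimality with a high quantile. Under heavy tails, a single run can land far above the mean much more often, which is exactly what high-probability bounds aim to control.

```python
import numpy as np

# Illustrative sketch only: SGD on f(x) = 0.5 * ||x||^2 with additive gradient
# noise, comparing light-tailed (Gaussian) and heavy-tailed (Student-t with
# finite variance) noise. Names and parameters are hypothetical choices.

rng = np.random.default_rng(0)

def sgd_final_error(noise, steps=1000, lr=0.01, dim=10):
    """Run a single pass of SGD and return the final suboptimality f(x_T)."""
    x = np.ones(dim)
    for _ in range(steps):
        grad = x + noise(dim)      # true gradient of 0.5*||x||^2 plus noise
        x = x - lr * grad
    return 0.5 * np.dot(x, x)

gaussian = lambda d: rng.normal(0.0, 1.0, d)               # sub-Gaussian noise
heavy = lambda d: rng.standard_t(df=2.5, size=d)           # heavy tail, finite variance

for name, noise in [("sub-Gaussian", gaussian), ("heavy-tailed", heavy)]:
    errs = np.array([sgd_final_error(noise) for _ in range(500)])
    print(f"{name:>12}: mean={errs.mean():.4f}  "
          f"95th percentile={np.percentile(errs, 95):.4f}")
```

In the light-tailed case the 95th percentile stays close to the mean, so the in-expectation guarantee is a reasonable proxy for a single run; with heavy-tailed noise the upper quantiles separate from the mean, motivating the high-probability analyses discussed above.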
arXiv.org Artificial Intelligence
Feb-28-2023