Emergence of heavy tails in homogenized stochastic gradient descent

Jiao, Zhe; Keller-Ressel, Martin

arXiv.org Artificial Intelligence 

It has repeatedly been observed that loss minimization by stochastic gradient descent (SGD) leads to heavy-tailed distributions of neural network parameters. Here, we analyze a continuous diffusion approximation of SGD, called homogenized stochastic gradient descent, show that it behaves asymptotically heavy-tailed, and give explicit upper and lower bounds on its tail-index. We validate these bounds in numerical experiments and show that they are typically close approximations to the empirical tail-index of SGD iterates.

An important step in this direction has been taken in Gurbuzbalaban et al. [2021], where the tail behavior of SGD iterates is characterized in terms of optimization parameters, dimension, and Hessian curvature at the loss minimum. One limitation of Gurbuzbalaban et al. [2021] is that this link is described only qualitatively, not quantitatively. Here, we provide an alternative approach through analyzing homogenized stochastic gradient descent, a diffusion approximation of SGD introduced in Paquette et al.
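To fix ideas, "asymptotically heavy-tailed with tail-index" is standardly understood as power-law decay of the iterate distribution, and a diffusion approximation of SGD is a stochastic differential equation driven by the gradient noise. The sketch below states this convention together with a generic such SDE; the symbols α, c, γ, f, Σ, and W are notational assumptions of this sketch, not details taken from the paper.

```latex
% Tail-index convention (schematic): X is heavy-tailed with tail-index \alpha
% if its distribution has power-law tails,
\[
  \mathbb{P}\bigl(\lVert X \rVert > x\bigr) \;\sim\; c\,x^{-\alpha},
  \qquad x \to \infty,
\]
% equivalently, the moments \mathbb{E}\lVert X \rVert^p are finite iff p < \alpha.

% A generic diffusion approximation of SGD (a sketch, not necessarily the exact
% definition of homogenized SGD): learning rate \gamma, loss f,
% gradient-noise covariance \Sigma, Brownian motion W,
\[
  \mathrm{d}X_t \;=\; -\gamma\,\nabla f(X_t)\,\mathrm{d}t
  \;+\; \gamma\,\Sigma(X_t)^{1/2}\,\mathrm{d}W_t .
\]
```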
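The entry does not say how the "empirical tail-index of SGD iterates" is measured; one standard estimator for power-law tails is the Hill estimator. A minimal sketch in Python follows, assuming the function name, the choice of k, and the synthetic test data, none of which are details from the paper.

```python
import numpy as np

def hill_estimator(samples: np.ndarray, k: int) -> float:
    """Hill estimator of the tail-index alpha from the k largest order statistics.

    A standard estimator for power-law tails P(|X| > x) ~ c * x**(-alpha);
    the choice of k trades bias against variance. Illustrative sketch only.
    """
    x = np.sort(np.abs(samples))[::-1]   # order statistics, descending
    logs = np.log(x[:k]) - np.log(x[k])  # log-excesses over the (k+1)-th largest
    return 1.0 / np.mean(logs)           # alpha_hat = 1 / mean log-excess

# Example: estimate the tail-index of iterates collected from an SGD run
# (here replaced by synthetic Pareto(alpha=2) samples for a self-contained demo).
rng = np.random.default_rng(0)
iterates = rng.pareto(2.0, size=100_000) + 1.0  # classical Pareto, alpha = 2
print(hill_estimator(iterates, k=1_000))        # should be close to 2.0
```

In practice the estimate is sensitive to k; plotting the estimator over a range of k (a "Hill plot") is the usual diagnostic for picking a stable value.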