Almost Sure Saddle Avoidance of Stochastic Gradient Methods without the Bounded Gradient Assumption
arXiv.org Artificial Intelligence
We prove that various stochastic gradient methods, including stochastic gradient descent (SGD), the stochastic heavy-ball (SHB) method, and the stochastic Nesterov's accelerated gradient (SNAG) method, almost surely avoid any strict saddle manifold. To the best of our knowledge, this is the first time such results have been obtained for the SHB and SNAG methods. Moreover, our analysis expands upon previous studies of SGD by removing the need for bounded gradients of the objective function and uniformly bounded noise. Instead, we introduce a more practical local boundedness assumption on the noisy gradient, which is naturally satisfied in the empirical risk minimization problems that typically arise in the training of neural networks.

Keywords: Stochastic gradient descent, stochastic heavy-ball, stochastic Nesterov's accelerated gradient, almost sure saddle avoidance
Feb-15-2023
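The three methods named in the abstract can be illustrated with their standard textbook update rules (these are the usual formulations, not necessarily the exact parameterizations used in the paper): SGD takes a step along a noisy gradient; SHB adds a momentum term proportional to the previous displacement; SNAG evaluates the gradient at a look-ahead point. The sketch below runs them on a simple quadratic with noise whose magnitude grows with the iterate, a toy stand-in for the locally bounded (rather than uniformly bounded) noise assumption; the function `noisy_grad` and all step-size and momentum values are illustrative choices, not from the paper.

```python
import random

def noisy_grad(x):
    # Gradient of f(x) = x^2 / 2 plus zero-mean noise whose scale
    # grows with |x| -- locally bounded, not uniformly bounded.
    return x + random.gauss(0.0, 0.1) * (1.0 + abs(x))

def sgd(x, steps=1000, lr=0.05):
    # Plain SGD: x_{k+1} = x_k - lr * g(x_k).
    for _ in range(steps):
        x -= lr * noisy_grad(x)
    return x

def shb(x, steps=1000, lr=0.05, beta=0.9):
    # Stochastic heavy-ball: add momentum beta * (x_k - x_{k-1}).
    x_prev = x
    for _ in range(steps):
        x_next = x - lr * noisy_grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

def snag(x, steps=1000, lr=0.05, beta=0.9):
    # Stochastic Nesterov: take the gradient at the look-ahead
    # point y_k = x_k + beta * (x_k - x_{k-1}).
    x_prev = x
    for _ in range(steps):
        y = x + beta * (x - x_prev)
        x_next = y - lr * noisy_grad(y)
        x_prev, x = x, x_next
    return x

if __name__ == "__main__":
    random.seed(0)
    for method in (sgd, shb, snag):
        print(method.__name__, round(method(5.0), 3))
```

All three iterations drift toward the minimizer at 0 despite the state-dependent noise; the paper's contribution is the complementary guarantee that, under the local boundedness assumption, such iterations almost surely do not converge to strict saddle points.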