SNOO: Step-K Nesterov Outer Optimizer - The Surprising Effectiveness of Nesterov Momentum Applied to Pseudo-Gradients
Dominik Kallusky, Vinay Rao, Vishal Nandavanam, Hao-Jun Michael Shi
arXiv.org Artificial Intelligence
The rapid development of large language models (LLMs) has driven demand for more efficient optimization techniques. Among these, the Lookahead family of optimizers employs a two-loop framework that maintains fast and slow sets of model weights. Multiple inner optimizer steps on the fast weights produce a trajectory - the pseudo-gradient - that is used to update the slow weights. DiLoCo, a notable example originally designed for distributed training, applies Nesterov momentum to the averaged pseudo-gradient from multiple workers, and is claimed to outperform even AdamW in a non-distributed setup. In this paper, we empirically show that DiLoCo's surprising effectiveness stems primarily from applying Nesterov momentum to the pseudo-gradient, which improves training in a non-distributed setting. We call this Lookahead variant the Step-$K$ Nesterov Outer Optimizer (SNOO). We demonstrate that SNOO achieves compute-factor gains of 1.5-2.5$\times$ in a non-distributed setting up to a scale of 1e23 training FLOPs, with improvements that increase with model size. Because of its minimal compute and memory overhead and its compatibility with model sharding, SNOO is a practical enhancement for a variety of inner optimizers, including AdamW and Muon.
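The two-loop scheme described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: plain SGD stands in for the inner optimizer (AdamW or Muon in the paper), the objective is a toy quadratic, and all hyperparameter names and values are assumptions.

```python
import numpy as np

def snoo(w0, grad_fn, inner_lr=0.1, outer_lr=1.0, beta=0.9, k=4, outer_steps=50):
    """Sketch of a Step-K Nesterov Outer Optimizer (SNOO) loop.

    Hyperparameters here are illustrative, not taken from the paper.
    """
    slow = w0.copy()          # slow weights (outer loop state)
    m = np.zeros_like(slow)   # outer Nesterov momentum buffer
    for _ in range(outer_steps):
        # Inner loop: k steps of a fast optimizer starting from the slow
        # weights (plain SGD here as a stand-in for AdamW/Muon).
        fast = slow.copy()
        for _ in range(k):
            fast -= inner_lr * grad_fn(fast)
        # The pseudo-gradient is the displacement of the inner trajectory.
        pseudo_grad = slow - fast
        # Nesterov momentum applied to the pseudo-gradient.
        m = beta * m + pseudo_grad
        slow -= outer_lr * (pseudo_grad + beta * m)
    return slow

# Toy usage: minimize 0.5 * ||w||^2, whose gradient is w itself.
w = snoo(np.array([1.0]), lambda w: w)
```

For this convex quadratic, the iterates contract toward the minimizer at zero; in a distributed DiLoCo-style setup the pseudo-gradient would instead be averaged across workers before the outer update.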
Oct-20-2025