Small Gradient Norm Regret for Online Convex Optimization
Gao, Wenzhi, He, Chang, Udell, Madeleine
This paper introduces a new problem-dependent regret measure for online convex optimization with smooth losses. The notion, which we call the $G^\star$ regret, depends on the cumulative squared gradient norm evaluated at the hindsight decision $x^\star$: $\sum_{t=1}^T \|\nabla \ell_t(x^\star)\|^2$. We show that the $G^\star$ regret strictly refines the existing $L^\star$ (small-loss) regret and can be arbitrarily sharper when the losses have vanishing curvature around the hindsight decision. We establish upper and lower bounds on the $G^\star$ regret and extend our results to dynamic regret and bandit settings. As a byproduct, we refine the existing convergence analysis of stochastic optimization algorithms in the interpolation regime. Experiments validate our theoretical findings.
Jan-23-2026
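To make the quantities in the abstract concrete, the following sketch computes the $G^\star$ quantity $\sum_{t=1}^T \|\nabla \ell_t(x^\star)\|^2$ and the corresponding $L^\star$ (small-loss) quantity $\sum_{t=1}^T \ell_t(x^\star)$ for a hypothetical family of one-dimensional quadratic losses. The loss family, its parameters, and the variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical setup (not from the paper): smooth per-round losses
#   ell_t(x) = 0.5 * a_t * (x - b_t)^2,
# so each ell_t is a_t-smooth with minimizer b_t and minimum value 0.
rng = np.random.default_rng(0)
T = 100
a = rng.uniform(0.5, 2.0, T)   # per-round curvatures (smoothness constants)
b = rng.normal(0.0, 1.0, T)    # per-round minimizers

# Hindsight decision: minimizer of the cumulative loss sum_t ell_t(x).
# For quadratics this is the curvature-weighted average of the b_t.
x_star = np.sum(a * b) / np.sum(a)

# G* quantity: cumulative squared gradient norm at the hindsight decision.
grads = a * (x_star - b)                      # ell_t'(x_star)
G_star = np.sum(grads ** 2)

# L* quantity: cumulative loss at the hindsight decision, for comparison.
L_star = np.sum(0.5 * a * (x_star - b) ** 2)

print(f"G* = {G_star:.4f}, L* = {L_star:.4f}")
```

For an $a_t$-smooth loss with minimum value $0$, $\|\nabla \ell_t(x)\|^2 \le 2 a_t\, \ell_t(x)$, so in this example $G^\star \le 2 (\max_t a_t) L^\star$: the gradient-norm quantity is controlled by the small-loss quantity, consistent with the claim that the $G^\star$ regret refines the $L^\star$ regret.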