The Lazy Online Subgradient Algorithm is Universal on Strongly Convex Domains
Neural Information Processing Systems
We study Online Lazy Gradient Descent for optimisation on a strongly convex domain. The algorithm is known to achieve $O(\sqrt{N})$ regret against adversarial opponents; here we show it is universal in the sense that it also achieves $O(\log N)$ expected regret against i.i.d. opponents.
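For context, the lazy variant of online (sub)gradient descent plays the projection of the scaled negative sum of all past subgradients, rather than updating the previous iterate. A minimal sketch follows; the unit Euclidean ball (a canonical strongly convex domain), the fixed step size `eta`, and the example loss gradients are all illustrative assumptions, not details from the paper:

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto a ball of the given radius;
    # the ball serves here as an example of a strongly convex domain.
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def lazy_ogd(grads, eta=0.1, radius=1.0):
    """Lazy online subgradient descent (dual averaging):
    at each round, play the projection of -eta times the
    cumulative sum of previously observed subgradients."""
    g_sum = np.zeros_like(grads[0])
    plays = []
    for g in grads:
        x = project_ball(-eta * g_sum, radius)  # lazy step from the running sum
        plays.append(x)
        g_sum += g  # subgradient revealed after the play
    return plays

# Example: linear losses whose subgradients are drawn i.i.d.,
# the stochastic regime the abstract refers to.
rng = np.random.default_rng(0)
grads = [rng.normal(0.5, 1.0, size=2) for _ in range(100)]
plays = lazy_ogd(grads)
```

Every iterate stays inside the domain by construction, since each play is an explicit projection onto the ball.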