Review for NeurIPS paper: Regret Bounds without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses
This paper treats online convex optimization without Lipschitz continuity of the loss functions. The authors consider a variant of Lipschitz continuity called "relative Lipschitz continuity": this notion is originally due to Lu (2019) and compares nearby points via a Bregman divergence instead of the standard norm. In this context, the authors prove the following results:

- Under relative Lipschitz continuity alone: an O(\sqrt{T}) regret bound for follow-the-regularized-leader (FTRL) and for a "stabilized" variant of the online mirror descent (OMD) algorithm.

These results parallel standard bounds in the literature for Lipschitz-continuous / strongly convex functions. The extension to *relative* Lipschitz-continuous / strongly convex functions was welcomed by the reviewers, but two major issues were identified:

1. An earlier ICLR paper by Antonakopoulos et al. (2020) already provides O(\sqrt{T}) bounds for FTRL and OMD under a closely related "Riemannian Lipschitz continuity" condition.
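For orientation, a commonly cited form of the relative Lipschitz condition from Lu (2019) can be sketched as follows; this is written from memory as a rough reference, and the exact quantifiers and constants should be checked against the original paper:

```latex
% Hedged sketch of relative Lipschitz continuity (after Lu, 2019).
% f is L-relative Lipschitz w.r.t. a convex reference function h if,
% for all x, y in the domain and every subgradient g of f at x,
\[
  \langle g,\, x - y \rangle \;\le\; L \sqrt{2\, D_h(y, x)},
  \qquad
  D_h(y, x) \,=\, h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle,
\]
% where D_h is the Bregman divergence induced by h. Taking
% h(x) = \|x\|^2 / 2 gives D_h(y, x) = \|y - x\|^2 / 2, which
% recovers ordinary Lipschitz continuity with constant L.
```

The point of the definition is that the reference function h, rather than a norm, controls how fast f may vary, which is what lets the paper drop the standard Lipschitz assumption.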