Near-optimal Swap Regret Minimization for Convex Losses

Hu, Lunjia, Schneider, Jon, Wu, Yifan

arXiv.org Machine Learning

We give a randomized online algorithm that guarantees near-optimal $\widetilde O(\sqrt T)$ expected swap regret against any sequence of $T$ adaptively chosen Lipschitz convex losses on the unit interval. This improves the previous best bound of $\widetilde O(T^{2/3})$ and answers an open question of Fishelson et al. [2025b]. In addition, our algorithm is efficient: it runs in $\mathsf{poly}(T)$ time. A key technical idea we develop to obtain this result is to discretize the unit interval into bins at multiple scales of granularity and simultaneously use all scales to make randomized predictions, a technique we call multi-scale binning, which may be of independent interest. A direct corollary of our result is an efficient online algorithm for minimizing the calibration error for general elicitable properties. This result does not require the Lipschitzness assumption on the identification function needed in prior work, making it applicable to median calibration, for which we achieve the first $\widetilde O(\sqrt T)$ calibration error guarantee.
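To make the discretization idea concrete, here is a minimal sketch of building bins of the unit interval at several granularities at once. The function name and the dyadic choice of scales are illustrative assumptions, not the paper's actual construction; the point is only that every scale's grid is kept and used simultaneously.

```python
def multiscale_bins(num_scales):
    """Hypothetical sketch: discretize [0, 1] at several granularities.

    At scale k the interval splits into 2**k equal-width bins; we keep the
    bin midpoints of every scale rather than committing to a single grid.
    """
    grids = {}
    for k in range(1, num_scales + 1):
        width = 1.0 / (2 ** k)  # bin width at scale k
        grids[k] = [(i + 0.5) * width for i in range(2 ** k)]
    return grids

grids = multiscale_bins(3)
# grids[1] has 2 midpoints (coarse), grids[3] has 8 (fine)
```

A prediction rule could then randomize over midpoints drawn from all scales, trading off the coarse scales' stability against the fine scales' precision.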


Response to Reviewer 2: Empirical evaluation: Interestingly, we actually did an empirical evaluation in the earlier

Neural Information Processing Systems

We thank the reviewers for the positive feedback and their interest in our work! Below we address some questions. Both algorithms have well-tuned hyperparameters. We didn't include it in the submission because, after all, the [...] We will make sure to define them earlier in the paper in the revision. We are happy to clarify them.



Efficient Methods for Non-stationary Online Learning

Neural Information Processing Systems

In particular, dynamic regret [Zinkevich, 2003; Zhang et al., 2018a] and adaptive regret [Hazan and Seshadhri, 2009; Daniely et al., 2015] are proposed as two principled metrics to guide the algorithm design. The unknown comparators or unknown intervals bring considerable uncertainty to online optimization.
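The distinction can be sketched in a few lines: dynamic regret compares the learner's cumulative loss against a comparator sequence that may change every round, rather than against one fixed point as in static regret. The names and the drifting-target example below are illustrative assumptions, not from the paper.

```python
def dynamic_regret(losses, predictions, comparators):
    """Hypothetical sketch: sum of loss(x_t) - loss(u_t), where the
    comparator u_t may differ per round (static regret fixes a single u)."""
    return sum(f(x) - f(u) for f, x, u in zip(losses, predictions, comparators))

# Example: squared loss around a target that drifts by 0.1 each round.
losses = [lambda x, t=t: (x - 0.1 * t) ** 2 for t in range(5)]
preds = [0.0] * 5                     # a naive learner that never moves
comps = [0.1 * t for t in range(5)]   # per-round optimal comparators
regret = dynamic_regret(losses, preds, comps)
```

Here the per-round comparator attains zero loss, so the dynamic regret is exactly the naive learner's cumulative loss; the faster the comparator sequence drifts, the harder the bound is to control, which is the uncertainty the sentence above refers to.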