Online Learning with a Hint

Ofer Dekel, Arthur Flajolet, Nika Haghtalab, Patrick Jaillet

Neural Information Processing Systems

We study a variant of online linear optimization where the player receives a hint about the loss function at the beginning of each round. The hint is given in the form of a vector that is weakly correlated with the loss vector on that round.
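
As a concrete illustration of this setting (a toy sketch, not the paper's algorithm), the simulation below runs online linear optimization over the unit ball where each round's hint is a unit vector h satisfying <h, c> = alpha * ||c|| for the round's loss vector c. The dimension, horizon, and correlation level alpha are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, alpha = 5, 1000, 0.3  # dimension, rounds, hint correlation (illustrative)

def unit(v):
    return v / np.linalg.norm(v)

def make_hint(c):
    """Toy hint: a unit vector h with <h, c> = alpha * ||c||, built from the
    loss direction plus an orthogonal random component."""
    u = rng.standard_normal(d)
    u = u - (u @ unit(c)) * unit(c)       # remove the component along c
    return alpha * unit(c) + np.sqrt(1 - alpha**2) * unit(u)

total_hinted = total_blind = 0.0
for t in range(T):
    c = rng.standard_normal(d)            # this round's loss vector
    h = make_hint(c)                      # revealed before the player commits
    x_hinted = -h                         # exploit the hint: move against it
    x_blind = np.zeros(d)                 # hint-free baseline stays at the origin
    total_hinted += c @ x_hinted
    total_blind += c @ x_blind

print(f"cumulative loss with hint: {total_hinted:.1f}, without: {total_blind:.1f}")
```

Since <c, -h> = -alpha * ||c|| on every round, the hinted player accrues strictly negative loss; this per-round advantage from a weakly correlated hint is what the paper's algorithms exploit more carefully.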


Adaptive Oracle-Efficient Online Learning

Guanghui Wang

Neural Information Processing Systems

The classical algorithms for online learning and decision-making achieve optimal performance guarantees, but suffer from computational limitations when implemented at scale. More recent sophisticated techniques, which we refer to as oracle-efficient methods, address this problem by dispatching to an offline optimization oracle that can search through an exponentially large (or even infinite) space of decisions and select the one that performed best on any dataset. Despite the benefit of computational feasibility, oracle-efficient algorithms exhibit one major limitation: while they perform well in worst-case settings, they do not adapt well to friendly environments. In this paper we consider two such friendly scenarios: (a) "small-loss" problems and (b) IID data. We provide a new framework for designing follow-the-perturbed-leader algorithms that are oracle-efficient and adapt well to the small-loss environment, under a condition we call approximability (which is spiritually related to sufficient conditions provided in Dudík et al., 2020). We identify a series of real-world settings, including online auctions and transductive online classification, for which approximability holds. We also extend the algorithm to an IID data setting and establish a "best-of-both-worlds" bound in the oracle-efficient setting.
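
A minimal follow-the-perturbed-leader sketch over a finite action set is shown below; the argmin call stands in for the offline optimization oracle, and the exponential perturbation with scale sqrt(T) is a textbook choice, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 10, 5000
eta = np.sqrt(T)          # perturbation scale; a textbook choice, not tuned

def oracle(scores):
    """Stand-in offline optimization oracle: returns the best action for a
    given (perturbed) cumulative-loss vector. In the oracle-efficient setting
    this single call replaces an explicit search over a huge decision space."""
    return int(np.argmin(scores))

cum_loss = np.zeros(n)
alg_loss = 0.0
for t in range(T):
    losses = rng.random(n)                    # adversary's losses in [0, 1]
    noise = eta * rng.exponential(size=n)     # fresh perturbation each round
    action = oracle(cum_loss - noise)         # follow the perturbed leader
    alg_loss += losses[action]
    cum_loss += losses

print(f"FTPL regret: {alg_loss - cum_loss.min():.1f} "
      f"(best fixed action: {cum_loss.min():.1f})")
```

The point of the pattern is that the learner never enumerates the decision space itself; all optimization is delegated to the single oracle call per round.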


A Simple and Adaptive Learning Rate for FTRL in Online Learning with Minimax Regret of Θ(T^{2/3}) and its Application to Best-of-Both-Worlds

Neural Information Processing Systems

Follow-the-Regularized-Leader (FTRL) is a powerful framework for various online learning problems. By designing its regularizer and learning rate to be adaptive to past observations, FTRL is known to adapt to various properties of the underlying environment.
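
For reference, here is a minimal FTRL sketch on the probability simplex with the entropic regularizer (which reduces to exponential weights) and the standard anytime learning-rate schedule; this is the textbook rate, not the specific adaptive rate proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 8, 3000
cum_loss = np.zeros(n)
alg_loss = 0.0

for t in range(T):
    # Standard anytime schedule: the rate shrinks as rounds accumulate.
    eta = np.sqrt(np.log(n) / (t + 1))
    w = np.exp(-eta * (cum_loss - cum_loss.min()))  # shift by min for stability
    p = w / w.sum()              # entropic-regularized FTRL = exponential weights
    losses = rng.random(n)       # adversary's losses in [0, 1]
    alg_loss += p @ losses       # expected loss of the randomized play
    cum_loss += losses

print(f"regret: {alg_loss - cum_loss.min():.1f}, "
      f"sqrt(T log n) scale: {np.sqrt(T * np.log(n)):.1f}")
```

Adaptive variants replace the 1/sqrt(t) schedule with a rate driven by observed losses, which is the design space the paper studies.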


Private Online Learning via Lazy Algorithms

Neural Information Processing Systems

We study the problem of private online learning, focusing on online prediction from experts (OPE) and online convex optimization (OCO).
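
The toy sketch below illustrates one way laziness can interact with privacy in online prediction from experts: by committing to an expert for a batch of B rounds, only T/B noisy selections are made, so the privacy cost accrues over far fewer private operations. This is a simplified report-noisy-min illustration under assumed parameters, not the paper's algorithm or its guarantee.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T, B = 8, 4000, 200   # experts, rounds, batch length (illustrative)
eps = 1.0                # per-selection privacy budget (assumed)

cum_loss = np.zeros(n)
alg_loss = 0.0
expert = 0
for t in range(T):
    if t % B == 0:
        # Lazy switching: one private selection (report-noisy-min with
        # Laplace noise) per batch, so privacy cost accrues over T/B
        # selections instead of T.
        noisy = cum_loss + rng.laplace(scale=1.0 / eps, size=n)
        expert = int(np.argmin(noisy))
    losses = rng.random(n)   # adversary's losses in [0, 1]
    alg_loss += losses[expert]
    cum_loss += losses

print(f"regret: {alg_loss - cum_loss.min():.1f} "
      f"using {T // B} private selections")
```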