Constant Regret, Generalized Mixability, and Mirror Descent
Zakaria Mhammedi, Robert C. Williamson
Neural Information Processing Systems
We consider the setting of prediction with expert advice: a learner makes predictions by aggregating those of a group of experts. In this setting, and for the right choice of loss function and "mixing" algorithm, the learner can achieve constant regret regardless of the number of prediction rounds. For example, constant regret is achievable for mixable losses using the aggregating algorithm. The Generalized Aggregating Algorithm (GAA) is a family of algorithms parameterized by convex functions on the simplex (entropies); it reduces to the aggregating algorithm when the Shannon entropy S is used. For a given entropy Φ, losses for which constant regret is achievable using the GAA are called Φ-mixable. Which losses are Φ-mixable was previously left as an open question. We fully characterize Φ-mixability and answer other open questions posed by [6]. We show that the Shannon entropy S is fundamental to mixability: any Φ-mixable loss is necessarily S-mixable, and the lowest worst-case regret of the GAA is achieved using the Shannon entropy. Finally, by leveraging the connection between the mirror descent algorithm and the update step of the GAA, we propose a new adaptive generalized aggregating algorithm and analyze its regret bound.
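As a concrete illustration of the Shannon-entropy special case mentioned above, the following is a minimal sketch (ours, not from the paper) of the aggregating algorithm for binary log loss. Log loss is mixable with learning rate η = 1, and for it the aggregating algorithm's prediction is simply the weighted mixture of the experts' probabilities; the resulting regret is at most ln N for N experts, a constant independent of the number of rounds T. The function name and interface are illustrative.

```python
import numpy as np

def aggregating_algorithm(expert_probs, outcomes, eta=1.0):
    """Sketch of the aggregating algorithm for binary log loss.

    expert_probs: (T, N) array; entry [t, i] is expert i's predicted
                  probability of outcome 1 at round t.
    outcomes:     (T,) array of observed outcomes in {0, 1}.
    Returns the learner's cumulative log loss and the final log-weights.
    """
    T, N = expert_probs.shape
    log_w = np.zeros(N)  # uniform prior over the N experts
    total_loss = 0.0
    for t in range(T):
        # Normalize the weights (in log space for numerical stability).
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # For log loss, the substitution function is the weighted
        # mixture of the experts' probabilities.
        p = float(w @ expert_probs[t])
        y = outcomes[t]
        total_loss += -np.log(p if y == 1 else 1.0 - p)
        # Exponential-weights update: w_i ∝ w_i * exp(-eta * loss_i).
        expert_losses = -np.log(np.where(y == 1,
                                         expert_probs[t],
                                         1.0 - expert_probs[t]))
        log_w = log_w - eta * expert_losses
    return total_loss, log_w
```

With Φ = Shannon entropy, the GAA update coincides with the exponential-weights step above; for a general entropy Φ the update is instead a mirror-descent step induced by Φ, which is the connection the abstract's last sentence refers to.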