
No-Regret Learning with Unbounded Losses: The Case of Logarithmic Pooling Supplementary Appendix May 10, 2023 A. Omitted Proofs

Neural Information Processing Systems

We now prove the first bullet. This is a contradiction, so in fact $c \le \kappa$. The first claim of the second bullet is analogous. To do so, we note the following technical lemma (proof below). To prove (#1), we proceed by induction on $t$.



Reviews: Fast learning rates with heavy-tailed losses

Neural Information Processing Systems

This paper provides new results in an important area that is receiving increasing attention: fast rates when loss functions are unbounded and heavy-tailed. Existing results based on empirical process theory often rely on bounded or sub-Gaussian losses, and the heavy-tailed (hence non-sub-Gaussian) case is considerably harder. The results presented seem sound and are definitely novel. They rely on results of Sara van de Geer and collaborators on concentration inequalities for unbounded empirical processes. The material is very technical, and I would suggest moving some more of it to the appendix.


No-Regret Learning with Unbounded Losses: The Case of Logarithmic Pooling

Neural Information Processing Systems

For each of T time steps, m experts report probability distributions over n outcomes; we wish to learn to aggregate these forecasts in a way that attains a no-regret guarantee. We focus on the fundamental and practical aggregation method known as *logarithmic pooling* -- a weighted average of log odds -- which is in a certain sense the optimal choice of pooling method if one is interested in minimizing log loss (which we take as our loss function). We consider the problem of learning the best set of parameters (i.e., weights). We assume (by necessity) that the adversarial choices of outcomes and forecasts are consistent, in the sense that experts report calibrated forecasts. Imposing this constraint creates a (to our knowledge) novel semi-adversarial setting in which the adversary retains a large amount of flexibility.
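The logarithmic pool described in the abstract is a weighted geometric mean of the experts' forecasts, renormalized to a probability distribution: the pooled probability of outcome x is proportional to the product of each expert's probability raised to that expert's weight. A minimal NumPy sketch (the function name `log_pool` is ours, not from the paper):

```python
import numpy as np

def log_pool(forecasts, weights):
    """Logarithmic pool of m expert forecasts over n outcomes.

    forecasts: array-like of shape (m, n), each row a probability
    distribution with strictly positive entries.
    weights: length-m nonnegative weights (need not sum to 1; they are
    normalized by np.average).
    Returns the normalized weighted geometric mean of the rows.
    """
    log_p = np.log(np.asarray(forecasts, dtype=float))
    # Weighted average of log odds per outcome, then exponentiate back.
    pooled = np.exp(np.average(log_p, axis=0, weights=weights))
    return pooled / pooled.sum()
```

For example, pooling two experts who report (0.9, 0.1) and (0.5, 0.5) with equal weights yields (0.75, 0.25): the geometric mean pulls toward the more confident forecast after renormalization.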


A note on generalization bounds for losses with finite moments

Rodríguez-Gálvez, Borja, Rivasplata, Omar, Thobaben, Ragnar, Skoglund, Mikael

arXiv.org Machine Learning

This paper studies the truncation method from Alquier [1] to derive high-probability PAC-Bayes bounds for unbounded losses with heavy tails. Assuming that the $p$-th moment is bounded, the resulting bounds interpolate between a slow rate $1 / \sqrt{n}$ when $p=2$, and a fast rate $1 / n$ when $p \to \infty$ and the loss is essentially bounded. Moreover, the paper derives a high-probability PAC-Bayes bound for losses with a bounded variance. This bound has an exponentially better dependence on the confidence parameter and the dependency measure than previous bounds in the literature. Finally, the paper extends all results to guarantees in expectation and to single-draw PAC-Bayes. To do so, it obtains analogues of the PAC-Bayes fast-rate bound for bounded losses from [2] in these settings.
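The two endpoints quoted above pin down the natural interpolating rate. A bound that is $1/\sqrt{n}$ at $p=2$ and approaches $1/n$ as $p \to \infty$ typically scales as follows (our reading of the stated endpoints, not a formula quoted from the paper):

```latex
n^{-\frac{p-1}{p}} =
\begin{cases}
  n^{-1/2} & \text{if } p = 2,\\[2pt]
  \to n^{-1} & \text{as } p \to \infty,
\end{cases}
```

so the exponent $(p-1)/p$ measures how many finite moments the loss has and degrades gracefully as the tails get heavier.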



PAC-Bayes-Chernoff bounds for unbounded losses

Casado, Ioar, Ortega, Luis A., Masegosa, Andrés R., Pérez, Aritz

arXiv.org Machine Learning

We present a new high-probability PAC-Bayes oracle bound for unbounded losses. This result can be understood as a PAC-Bayes version of the Chernoff bound. The proof technique relies on uniformly bounding the tail of a certain random variable based on the Cramér transform of the loss. We highlight two applications of our main result. First, we show that our bound solves the open problem of optimizing the free parameter in many PAC-Bayes bounds. Second, we show that our approach allows working with flexible assumptions on the loss function, resulting in novel bounds that generalize previous ones and can be minimized to obtain Gibbs-like posteriors.
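For reference, the Cramér transform invoked in the abstract is the standard Legendre–Fenchel conjugate of the cumulant generating function (this is the textbook definition, not a formula taken from the paper): for a random variable $X$ with $\Lambda(\lambda) = \log \mathbb{E}\,e^{\lambda X}$,

```latex
\Lambda^*(t) \;=\; \sup_{\lambda \ge 0}\,\bigl(\lambda t - \Lambda(\lambda)\bigr),
```

which yields the Chernoff tail bound $\Pr[X \ge t] \le e^{-\Lambda^*(t)}$; a PAC-Bayes version of this inequality is what the paper's main result provides.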