No-Regret Learning with Unbounded Losses: The Case of Logarithmic Pooling
Neural Information Processing Systems
For each of $T$ time steps, $m$ experts report probability distributions over $n$ outcomes; we wish to learn to aggregate these forecasts in a way that attains a no-regret guarantee. We focus on the fundamental and practical aggregation method known as *logarithmic pooling* -- a weighted average of log odds -- which is in a certain sense the optimal choice of pooling method if one is interested in minimizing log loss (which we take to be our loss function). We consider the problem of learning the best set of parameters (i.e., the experts' weights).
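As an informal illustration of the pooling method the abstract describes, the sketch below computes a logarithmic pool of $m$ expert forecasts over $n$ outcomes: the pooled distribution is proportional to the weighted geometric mean of the experts' distributions (equivalently, a weighted average in log space). The function name `log_pool` and the array shapes are our own choices for this sketch, not notation from the paper.

```python
import numpy as np

def log_pool(forecasts: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Logarithmic pooling: pooled distribution proportional to
    the weighted geometric mean prod_i forecasts[i]**weights[i].

    forecasts: (m, n) array; each row is a probability distribution over n outcomes.
    weights:   (m,) array of nonnegative weights summing to 1.
    """
    # Weighted sum of log-probabilities, one value per outcome.
    log_p = weights @ np.log(forecasts)
    # Subtract the max before exponentiating for numerical stability,
    # then renormalize to obtain a probability distribution.
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

# Example: two experts over two outcomes, equal weights.
forecasts = np.array([[0.5, 0.5],
                      [0.9, 0.1]])
weights = np.array([0.5, 0.5])
pooled = log_pool(forecasts, weights)  # proportional to (sqrt(0.45), sqrt(0.05))
```

With equal weights this reduces to the normalized geometric mean; for two outcomes it coincides with averaging the experts' log odds, matching the abstract's description.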