PAC-Bayes unleashed: generalisation bounds with unbounded losses
Haddouche, Maxime, Guedj, Benjamin, Rivasplata, Omar, Shawe-Taylor, John
Since its emergence in the late 90s, the PAC-Bayes theory (see the seminal papers by Shawe-Taylor and Williamson, 1997 and McAllester, 1998, 1999, or the recent survey by Guedj, 2019) has been a powerful tool to obtain generalisation bounds and derive efficient learning algorithms. PAC-Bayes bounds were originally meant for binary classification problems (Seeger, 2002; Langford, 2005; Catoni, 2007), but the literature now includes many contributions involving any bounded loss function (without loss of generality, with values in [0, 1]), not just the binary loss. Generalisation bounds are helpful to ensure that a learning algorithm will perform well on future, similar batches of data. Our goal is to provide new PAC-Bayesian generalisation bounds holding for unbounded loss functions, and thus extend the usability of PAC-Bayes to a much larger class of learning problems. Some ways to circumvent the bounded-range assumption on the losses have been addressed in the recent literature.
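For context on the bounded-loss setting the abstract refers to, a classical McAllester-style PAC-Bayes bound (stated here as background, not as this paper's result) reads as follows: for a loss with values in [0, 1], a prior P fixed before seeing the data, and an i.i.d. sample of size n, with probability at least 1 - δ, simultaneously for all posteriors Q,

```latex
\mathbb{E}_{h \sim Q}\big[L(h)\big]
\;\le\;
\mathbb{E}_{h \sim Q}\big[\hat{L}_n(h)\big]
+ \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
```

where L is the population risk, \hat{L}_n the empirical risk, and KL the Kullback-Leibler divergence. The square-root complexity term relies on the loss being bounded (via a Hoeffding-type argument), which is precisely the assumption the paper aims to relax.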
Sep-30-2020