PAC-Bayes unleashed: generalisation bounds with unbounded losses

Haddouche, Maxime, Guedj, Benjamin, Rivasplata, Omar, Shawe-Taylor, John

arXiv.org Machine Learning 

Since its emergence in the late 90s, the PAC-Bayes theory (see the seminal papers by Shawe-Taylor and Williamson, 1997 and McAllester, 1998, 1999, or the recent survey by Guedj, 2019) has been a powerful tool to obtain generalisation bounds and to derive efficient learning algorithms. PAC-Bayes bounds were originally designed for binary classification problems (Seeger, 2002; Langford, 2005; Catoni, 2007), but the literature now includes many contributions covering any bounded loss function (without loss of generality, with values in [0, 1]), not just the binary loss. Generalisation bounds help ensure that a learning algorithm will perform well on future similar batches of data. Our goal is to provide new PAC-Bayesian generalisation bounds holding for unbounded loss functions, and thus extend the applicability of PAC-Bayes to a much larger class of learning problems. Several ways to circumvent the bounded-range assumption on the loss have been proposed in the recent literature.
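For context, a representative bounded-loss PAC-Bayes bound of the McAllester type (which the paper seeks to extend beyond the bounded setting) can be stated as follows; here $\ell$ is a loss with values in $[0,1]$, $\pi$ is a data-free prior over hypotheses, $\rho$ is any posterior, $L$ and $\hat{L}_S$ denote the population and empirical risks, and $m$ is the sample size:

\[
\mathbb{P}_{S \sim \mathcal{D}^m}\!\left(
\forall \rho:\;
\mathbb{E}_{h \sim \rho}\big[L(h)\big]
\;\le\;
\mathbb{E}_{h \sim \rho}\big[\hat{L}_S(h)\big]
+
\sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{m}}{\delta}}{2m}}
\right) \ge 1 - \delta.
\]

The square-root slack term is finite only because the loss range is bounded (via a Hoeffding-type argument); removing that assumption, as the paper proposes, requires controlling the loss through other means, such as moment or tail conditions on the loss distribution.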
