PAC-Bayes Analysis Beyond the Usual Bounds

Neural Information Processing Systems 

We focus on a stochastic learning model where the learner observes a finite set of training examples and the output of the learning process is a data-dependent distribution over a space of hypotheses. The learned data-dependent distribution is then used to make randomized predictions, and the high-level theme addressed here is guaranteeing the quality of predictions on examples that were not seen during training, i.e. generalization. In this setting the unknown quantity of interest is the expected risk of the data-dependent randomized predictor, for which upper bounds can be derived via a PAC-Bayes analysis, leading to PAC-Bayes bounds. Specifically, we present a basic PAC-Bayes inequality for stochastic kernels, from which one may derive extensions of various known PAC-Bayes bounds as well as novel bounds. We clarify the role of the requirements of fixed 'data-free' priors, bounded losses, and i.i.d. data.
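For orientation, the classical PAC-Bayes bounds that this framework extends take roughly the following form. The statement below is a sketch of a McAllester-style bound under the three requirements the abstract highlights: a fixed data-free prior P over the hypothesis space, a loss bounded in [0, 1], and n i.i.d. training examples. Here R and \hat{R}_S denote the population and empirical risks and Q ranges over posteriors; it is illustrative only, not the paper's more general inequality for stochastic kernels.

\Pr_{S \sim \mathcal{D}^n}\left[\ \forall Q:\ \mathbb{E}_{h \sim Q}[R(h)] \;\le\; \mathbb{E}_{h \sim Q}[\hat{R}_S(h)] \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{2n}}\ \right] \;\ge\; 1 - \delta

Relaxing any of the three requirements (allowing data-dependent priors, unbounded losses, or non-i.i.d. data) is precisely where the stochastic-kernel analysis goes beyond bounds of this form.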
