PAC-Bayes




Toward Better PAC-Bayes Bounds for Uniformly Stable Algorithms
Yunwen Lei

Neural Information Processing Systems

We give sharper bounds for uniformly stable randomized algorithms in a PAC-Bayesian framework, which improve the existing results by up to a factor of n (ignoring a log factor), where n is the sample size. The key idea is to bound the moment generating function of the generalization gap using the concentration of weakly dependent random variables due to Bousquet et al. (2020). We introduce an assumption of a sub-exponential stability parameter, which allows a general treatment that we instantiate in two applications: stochastic gradient descent and randomized coordinate descent. Our results eliminate the requirement of strong convexity from previous results, and hold for non-smooth convex problems.




On Margins and Generalisation for Voting Classifiers

Neural Information Processing Systems

We study the generalisation properties of majority voting on finite ensembles of classifiers, proving margin-based generalisation bounds via the PAC-Bayes theory. These provide state-of-the-art guarantees on a number of classification tasks. Our central results leverage the Dirichlet posteriors studied recently by Zantedeschi et al. (2021) for training voting classifiers; in contrast to that work our bounds apply to non-randomised votes via the use of margins. Our contributions add perspective to the debate on the "margins theory" proposed by Schapire et al. (1998) for the generalisation of ensemble classifiers.
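The margin-based bounds above hinge on the margin of a weighted majority vote: the total weight placed on the correct class minus the largest weight placed on any incorrect class. As a minimal illustration (not the paper's method, just the standard definition), this quantity can be computed as follows; the function name `vote_margin` is our own:

```python
import numpy as np

def vote_margin(votes, weights, y_true):
    """Margin of a weighted majority vote on one example.

    votes:   (m,) array of class labels predicted by m classifiers
    weights: (m,) nonnegative weights summing to 1
    y_true:  the correct label

    Returns the weight on the correct class minus the largest
    weight assigned to any incorrect class; positive margin
    means the vote classifies the example correctly.
    """
    classes = np.unique(votes)
    scores = {c: weights[votes == c].sum() for c in classes}
    correct = scores.get(y_true, 0.0)
    wrong = max((s for c, s in scores.items() if c != y_true), default=0.0)
    return correct - wrong

# Example: three equally weighted classifiers, two of which are correct.
votes = np.array([1, 1, 0])
w = np.array([1 / 3, 1 / 3, 1 / 3])
print(vote_margin(votes, w, 1))  # 2/3 - 1/3 = 1/3
```

Margin-based generalisation bounds then control the test error of the vote in terms of how often this margin falls below a threshold on the training sample.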



PAC-Bayes Learning Bounds for Sample-Dependent Priors

Neural Information Processing Systems

Sample-dependent priors are also relevant in emerging scenarios such as adversarial training. A common practice here is to smooth a given classifier by injecting Gaussian noise into the inputs.
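The smoothing practice mentioned here is usually realised by Monte-Carlo averaging of the base classifier's predictions over Gaussian-perturbed inputs. A minimal sketch (the function `smoothed_predict` and the toy classifier are our own illustration, not the paper's implementation):

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n_samples=100, seed=None):
    """Monte-Carlo estimate of a Gaussian-smoothed classifier.

    Perturbs the input x with isotropic Gaussian noise N(0, sigma^2 I),
    runs the base classifier on every noisy copy, and returns the
    majority class. `classifier` maps a batch of inputs to labels.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    preds = classifier(x[None, :] + noise)          # labels for each noisy copy
    labels, counts = np.unique(preds, return_counts=True)
    return labels[np.argmax(counts)]                # most frequent label

# Toy base classifier: sign of the first coordinate.
clf = lambda batch: (batch[:, 0] > 0).astype(int)
x = np.array([0.5, -1.0])
print(smoothed_predict(clf, x, sigma=0.1, seed=0))
```

With a small noise level relative to the input's distance from the decision boundary, the smoothed prediction agrees with the base classifier; the smoothed classifier's stability under input perturbations is what makes it amenable to sample-dependent-prior analyses.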


Controlling Multiple Errors Simultaneously with a PAC-Bayes Bound

Neural Information Processing Systems

We transform our bound into a differentiable training objective. Our bound is especially useful in cases where the severity of different mis-classifications may change over time; existing PAC-Bayes bounds can only bound a particular pre-decided weighting of the error types.


PAC-Bayes bounds for stable algorithms with instance-dependent priors

Neural Information Processing Systems

PAC-Bayes bounds have been proposed to get risk estimates based on a training sample. In this paper the PAC-Bayes approach is combined with stability of the hypothesis learned by a Hilbert space-valued algorithm. The PAC-Bayes setting is used with a Gaussian prior centered at the expected output; thus a novelty of our paper is using priors defined in terms of the data-generating distribution. Our main result estimates the risk of the randomized algorithm in terms of the hypothesis stability coefficients. We also provide a new bound for the SVM classifier, which is compared to other known bounds experimentally. Ours appears to be the first uniform hypothesis stability-based bound that evaluates to non-trivial values.