A Limitation of the PAC-Bayes Framework
Neural Information Processing Systems
PAC-Bayes is a useful framework for deriving generalization bounds, introduced by McAllester ('98). This framework has the flexibility to derive distribution- and algorithm-dependent bounds, which are often tighter than VC-related uniform convergence bounds. In this manuscript we present a limitation of the PAC-Bayes framework. We demonstrate an easy learning task that is not amenable to a PAC-Bayes analysis. Specifically, we consider the task of linear classification in 1D; it is well known that this task is learnable using just O(log(1/δ)/ɛ) examples. On the other hand, we show that this fact cannot be proved using a PAC-Bayes analysis: for any algorithm that learns 1-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large.
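To make the learning task concrete, here is a minimal sketch (not from the paper) of an ERM learner for 1D linear classifiers, modeled as thresholds h_t(x) = 1 if x ≥ t. The function name `erm_threshold` and the tie-breaking rule (return the smallest consistent threshold) are illustrative assumptions; the paper's result applies to any such learning algorithm.

```python
def erm_threshold(xs, ys):
    """Return a threshold t consistent with a realizable sample,
    assuming labels were generated by some h_t(x) = 1[x >= t]."""
    pos = [x for x, y in zip(xs, ys) if y == 1]
    neg = [x for x, y in zip(xs, ys) if y == 0]
    # Realizability implies max(neg) < min(pos); any t in that gap
    # (inclusive of min(pos)) is consistent. Pick min(pos) if any
    # positive example exists, else a threshold above all samples.
    if pos:
        return min(pos)
    return max(neg) + 1.0 if neg else 0.0

def predict(t, x):
    return 1 if x >= t else 0
```

Standard VC arguments give the O(log(1/δ)/ɛ) sample complexity for this class; the paper's point is that no PAC-Bayes prior/posterior analysis can certify it uniformly over realizable distributions.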