Cross-validation Confidence Intervals for Test Error
Pierre Bayle – Neural Information Processing Systems
This work develops central limit theorems for cross-validation and consistent estimators of its asymptotic variance under weak stability conditions on the learning algorithm. Together, these results provide practical, asymptotically exact confidence intervals for k-fold test error and valid, powerful hypothesis tests of whether one learning algorithm has smaller k-fold test error than another. These results are also the first of their kind for the popular choice of leave-one-out cross-validation. In our real-data experiments with diverse learning algorithms, the resulting intervals and tests outperform the most popular alternative methods from the literature.
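To make the idea concrete, here is a minimal sketch of a normal-approximation confidence interval for k-fold test error: pool the per-example test losses across folds, then apply mean ± z · s/√n. This is an illustrative simplification, not the paper's stability-based variance estimator; the function names (`kfold_test_error_ci`, `fit`, `predict_loss`) are hypothetical.

```python
import numpy as np
from statistics import NormalDist


def kfold_test_error_ci(X, y, fit, predict_loss, k=10, alpha=0.05, seed=0):
    """Normal-approximation CI for k-fold CV test error.

    Pools per-example held-out losses across folds and applies
    mean +/- z * std / sqrt(n). A simplified stand-in for the
    paper's consistent asymptotic-variance estimator (assumption).
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)

    losses = np.empty(n)
    for fold in folds:
        train = np.setdiff1d(idx, fold)          # indices outside this fold
        model = fit(X[train], y[train])          # train on k-1 folds
        losses[fold] = predict_loss(model, X[fold], y[fold])  # held-out losses

    mean = losses.mean()
    z = NormalDist().inv_cdf(1 - alpha / 2)      # e.g. 1.96 for alpha = 0.05
    half = z * losses.std(ddof=1) / np.sqrt(n)
    return mean, (mean - half, mean + half)


# Example: mean predictor with squared loss on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = rng.normal(loc=2.0, size=200)
fit_mean = lambda Xtr, ytr: ytr.mean()
sq_loss = lambda m, Xte, yte: (yte - m) ** 2
err, (lo, hi) = kfold_test_error_ci(X, y, fit_mean, sq_loss, k=10)
```

The interval is asymptotically exact only when the learning algorithm is suitably stable, which is precisely the condition the paper's central limit theorems formalize.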