Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate
Mikhail Belkin, Daniel J. Hsu, Partha Mitra
Neural Information Processing Systems
Many modern machine learning models are trained to achieve zero or near-zero training error in order to obtain near-optimal (but non-zero) test error. This phenomenon of strong generalization performance for "overfitted" / interpolated classifiers appears to be ubiquitous in high-dimensional data, having been observed in deep networks, kernel machines, boosting, and random forests. Their performance is consistently robust even when the data contain large amounts of label noise. Very little theory is available to explain these observations. The vast majority of theoretical analyses of generalization allow for interpolation only when there is little or no label noise. This paper takes a step toward a theoretical foundation for interpolated classifiers by analyzing local interpolating schemes, including a geometric simplicial interpolation algorithm and singularly weighted k-nearest neighbor schemes. Consistency or near-consistency is proved for these schemes in classification and regression problems.
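To make the "singularly weighted k-nearest neighbor" idea concrete, here is a minimal Python sketch of weighted k-NN regression with a singular weight function w(d) proportional to d^(-alpha). Because the weight diverges as the distance goes to zero, the predictor reproduces training labels exactly at training points (it interpolates) while still behaving like a local average elsewhere. Function and parameter names (`singular_knn_predict`, `k`, `alpha`, `eps`) are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def singular_knn_predict(X_train, y_train, X_query, k=5, alpha=2.0, eps=1e-12):
    """Weighted k-NN regression with a singular kernel w(d) ~ d^(-alpha).

    The diverging weight near d = 0 makes the estimator return the training
    label at (or arbitrarily close to) a training point, i.e. it interpolates
    the training data, yet averages over k neighbors away from them.
    Illustrative sketch only; not the paper's exact construction.
    """
    preds = []
    for x in X_query:
        d = np.linalg.norm(X_train - x, axis=1)          # distances to all training points
        idx = np.argsort(d)[:k]                          # indices of the k nearest neighbors
        if d[idx[0]] < eps:                              # query coincides with a training point
            preds.append(y_train[idx[0]])
            continue
        w = d[idx] ** (-alpha)                           # singular weights blow up near zero
        preds.append(np.dot(w, y_train[idx]) / w.sum())  # weighted average of neighbor labels
    return np.array(preds)

# Usage: the fit passes through noisy training labels yet acts as a
# local average at new query points.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * rng.normal(size=50)
print(singular_knn_predict(X, y, X[:3]))               # reproduces the (noisy) training labels
print(singular_knn_predict(X, y, np.array([[0.5]])))   # smooth prediction at a new point
```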