Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate

Mikhail Belkin, Daniel J. Hsu, Partha Mitra

Neural Information Processing Systems (NeurIPS), 2018

Many modern machine learning models are trained to achieve zero or near-zero training error in order to obtain near-optimal (but non-zero) test error. This phenomenon of strong generalization performance for "overfitted" / interpolated classifiers appears to be ubiquitous in high-dimensional data, having been observed in deep networks, kernel machines, boosting, and random forests.
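The phenomenon can be illustrated with the 1-nearest-neighbor rule, a classical interpolating classifier: by construction it fits every (distinct) training point exactly, yet its test error can remain well below chance even under label noise. The sketch below uses only synthetic 1-D data and the standard library; the data-generating setup (sign labels with 10% label noise) is an illustrative assumption, not an experiment from the paper.

```python
import random

def one_nn_predict(train_x, train_y, x):
    """1-nearest-neighbor rule: interpolates the training set, since each
    distinct training point is its own nearest neighbor (training error 0)."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

def noisy_label(x, rng):
    """True class is the sign of x; the label is flipped with probability 0.1."""
    y = 1 if x >= 0 else 0
    return 1 - y if rng.random() < 0.1 else y

rng = random.Random(0)
train_x = [rng.uniform(-1, 1) for _ in range(200)]
train_y = [noisy_label(x, rng) for x in train_x]
test_x = [rng.uniform(-1, 1) for _ in range(500)]
test_y = [noisy_label(x, rng) for x in test_x]

train_err = sum(one_nn_predict(train_x, train_y, x) != y
                for x, y in zip(train_x, train_y)) / len(train_x)
test_err = sum(one_nn_predict(train_x, train_y, x) != y
               for x, y in zip(test_x, test_y)) / len(test_x)

print(f"training error: {train_err:.3f}")  # 0 by interpolation
print(f"test error:     {test_err:.3f}")   # near 2p(1-p) for noise level p
```

Despite zero training error, the test error here stays close to the asymptotic 1-NN risk (about twice the Bayes risk for small noise), far from the catastrophic overfitting that classical intuition might suggest.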
