Noisy Interpolation Learning with Shallow Univariate ReLU Networks
Nirmit Joshi, Gal Vardi, Nathan Srebro
arXiv.org Artificial Intelligence
A recent realization is that, although overfitting can sometimes be catastrophic, as classical learning theory suggests, in other settings overfitting, and even interpolation learning, i.e. insisting on zero training error on noisy data, might not be so catastrophic, allowing for good generalization (low test error) and even consistency [Zhang et al., 2017, Belkin et al., 2018]. This has led to efforts to understand the nature of overfitting: how benign or catastrophic it is, and what determines this behavior, across different settings and models.
Aug-1-2023
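
To make the setting concrete, below is a minimal sketch (not the paper's experiments) of interpolation learning with a shallow univariate ReLU network: fit noisy one-dimensional data to near-zero training error, then measure test error against clean labels. The target function, noise level, network width, optimizer, and learning rate are all illustrative assumptions, not choices taken from the paper.

```python
import torch

torch.manual_seed(0)

# Noisy univariate regression data: y = f*(x) + noise.
# The target f*(x) = sin(2*pi*x) and noise scale 0.3 are illustrative assumptions.
n_train, n_test = 30, 1000
x_train = torch.rand(n_train, 1)
y_train = torch.sin(2 * torch.pi * x_train) + 0.3 * torch.randn(n_train, 1)
x_test = torch.rand(n_test, 1)
y_test = torch.sin(2 * torch.pi * x_test)  # clean labels for measuring test error

# Shallow (one-hidden-layer) ReLU network; width 1000 is an arbitrary choice,
# large enough to interpolate the 30 noisy training points.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 1000),
    torch.nn.ReLU(),
    torch.nn.Linear(1000, 1),
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

# Train until (near) interpolation: essentially zero training error on noisy labels.
for step in range(20000):
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()
    if loss.item() < 1e-6:  # the network now fits the noise exactly
        break

with torch.no_grad():
    train_mse = loss_fn(model(x_train), y_train).item()
    test_mse = loss_fn(model(x_test), y_test).item()
print(f"train MSE: {train_mse:.2e}  (interpolation of noisy data)")
print(f"test  MSE: {test_mse:.3f}  (how benign is the overfitting?)")
```

Whether the test error remains small despite fitting the noise, and how it behaves as the sample size grows, is exactly the benign-versus-catastrophic question the abstract raises for this model class.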