Review for NeurIPS paper: Early-Learning Regularization Prevents Memorization of Noisy Labels
The paper studies the following interesting phenomenon (observed in the previous literature): when trained on a dataset containing incorrectly labeled points (i.e., "label noise"), DNNs first learn the benign ("correctly labeled") points, and only once this is done do they start "memorizing" the noisy points. It was previously shown empirically in the literature that this second "memorization" phase hurts generalization. The authors make two contributions:

(Contribution 1) They demonstrate, both empirically and theoretically, that a similar phenomenon can be observed in the simpler setting of over-parametrized (dimensionality $\gg$ number of points) linear two-class logistic regression, when the class distributions are isotropic Gaussians with fixed means $\pm\mu$ and vanishing variance (see Theorem 1 and Figure A.1).

(Contribution 2) Motivated by the theory of Contribution 1, the authors propose a novel regularizer (a rough sketch is given after this summary). When used in vanilla DNN training with the cross-entropy loss, this regularizer successfully prevents the network from falling into the "memorization" phase (as evidenced by Figure 1).

All the reviewers agree that the topic and focus of this paper are very timely.
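The mechanics of the proposed regularizer are not spelled out in this summary; for concreteness, below is a minimal PyTorch sketch of an early-learning-style regularizer in the spirit of the paper: per-example targets are maintained as a temporal moving average of the network's softmax outputs (which the early-learning phase tends to get right on clean labels), and a log(1 - <t, p>) penalty keeps predictions aligned with those targets. The class name ELRLoss and the hyperparameter values beta and lam are illustrative placeholders, not necessarily the authors' exact settings.

```python
import torch
import torch.nn.functional as F

class ELRLoss(torch.nn.Module):
    """Early-learning regularization, sketched: cross-entropy plus a penalty
    that anchors each example's prediction to a running average of its own
    past predictions, discouraging late-stage memorization of noisy labels."""

    def __init__(self, num_examples, num_classes, beta=0.7, lam=3.0):
        super().__init__()
        # Per-example moving-average targets (one row per training example).
        self.register_buffer("targets", torch.zeros(num_examples, num_classes))
        self.beta = beta  # moving-average momentum (placeholder value)
        self.lam = lam    # regularization strength (placeholder value)

    def forward(self, logits, labels, idx):
        # Softmax predictions, clamped away from 0/1 for numerical stability.
        p = F.softmax(logits, dim=1).clamp(1e-4, 1.0 - 1e-4)
        p = p / p.sum(dim=1, keepdim=True)
        # Update the targets for this batch without tracking gradients.
        with torch.no_grad():
            self.targets[idx] = (
                self.beta * self.targets[idx] + (1.0 - self.beta) * p.detach()
            )
        ce = F.cross_entropy(logits, labels)
        # log(1 - <t_i, p_i>) is minimized as p_i aligns with t_i, so this
        # term pulls predictions back toward the early-learning targets.
        elr = (1.0 - (self.targets[idx] * p).sum(dim=1)).log().mean()
        return ce + self.lam * elr
```

In use, one would construct `ELRLoss(len(train_set), num_classes)` once and, at each training step, pass the batch's example indices alongside the logits and labels so the correct rows of the moving-average targets are updated.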