Review for NeurIPS paper: Early-Learning Regularization Prevents Memorization of Noisy Labels

Neural Information Processing Systems

Weaknesses: I have many reservations about the claims of the paper. I would appreciate it if the authors could clarify some of these issues during their rebuttal. First, the proof of their main theorem about logistic regression has many issues. One key issue is that the authors make assumptions within the proof that are not clearly stated or justified upfront. For example, at Line 440 in the supplementary material, the proof assumes a bound on the inner product θᵀv that is never stated as a hypothesis of the theorem.


Review for NeurIPS paper: Early-Learning Regularization Prevents Memorization of Noisy Labels

Neural Information Processing Systems

The paper studies the following interesting phenomenon (observed in the previous literature): when trained on a dataset with incorrectly labeled points (i.e., "label noise"), DNNs first learn the benign ("correctly labeled") points, and once this is done they start "memorizing" the noisy points. It was previously shown empirically in the literature that the second, "memorization" phase hurts generalization.

The authors make two contributions. (Contribution 1) They demonstrate, empirically and theoretically, that a similar phenomenon can be observed in the simpler setting of over-parametrized (dimensionality ≫ number of points) linear two-class logistic regression, when the class distributions are isotropic Gaussians with fixed means ±mu and vanishing variance (see Theorem 1 and Figure A.1). (Contribution 2) Motivated by the theory of Contribution 1, the authors propose a novel regularizer. When used in vanilla DNN training with the cross-entropy loss, this regularizer successfully prevents the network from entering the "memorization" phase (as evidenced by Figure 1).

All the reviewers agree that the topic and focus of this paper are very timely.
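The early-learning/memorization dynamic from Contribution 1 can be illustrated with a small simulation: over-parameterized (d ≫ n) logistic regression on two Gaussian classes with means ±mu and small variance, with a fraction of labels flipped. Early in gradient descent the classifier agrees with the true labels, even on the flipped examples; with continued training it increasingly fits the given noisy labels. A minimal sketch (all constants here — dimension, noise level, flip rate, learning rate, step counts — are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, flip_frac = 40, 500, 0.2          # over-parameterized: d >> n

mu = np.ones(d) / np.sqrt(d)            # unit-norm class mean
y_true = rng.choice([-1.0, 1.0], size=n)
X = y_true[:, None] * mu + 0.5 * rng.standard_normal((n, d)) / np.sqrt(d)

# flip a fraction of the labels ("label noise")
y_given = y_true.copy()
flipped = rng.choice(n, size=int(flip_frac * n), replace=False)
y_given[flipped] *= -1.0

def grad(theta):
    # gradient of the average logistic loss w.r.t. the *given* labels
    margins = np.clip(y_given * (X @ theta), -50.0, 50.0)
    return -(X.T @ (y_given / (1.0 + np.exp(margins)))) / n

theta = np.zeros(d)
agree_true_early = agree_noisy_early = None
for t in range(10000):
    theta -= 2.0 * grad(theta)
    if t == 5:  # snapshot during the early-learning phase
        pred = np.sign(X @ theta)
        agree_true_early = np.mean(pred == y_true)              # ~1: fits clean structure
        agree_noisy_early = np.mean(pred[flipped] == y_given[flipped])  # ~0: ignores noise

pred = np.sign(X @ theta)  # after long training: memorization of flipped labels
agree_noisy_late = np.mean(pred[flipped] == y_given[flipped])
print(agree_true_early, agree_noisy_early, agree_noisy_late)
```

Early on, the gradient is dominated by the majority of correctly labeled points, so the iterate aligns with mu and predicts the true labels; with many more steps, the over-parameterized model has enough directions to also fit the flipped labels.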


Early-Learning Regularization Prevents Memorization of Noisy Labels

Neural Information Processing Systems

We propose a novel framework to perform classification via deep learning in the presence of noisy annotations. When trained on noisy labels, deep neural networks have been observed to first fit the training data with clean labels during an "early learning" phase, before eventually memorizing the examples with false labels. We prove that early learning and memorization are fundamental phenomena in high-dimensional classification tasks, even in simple linear models, and give a theoretical explanation in this setting. Motivated by these findings, we develop a new technique for noisy classification tasks, which exploits the progress of the early learning phase. In contrast with existing approaches, which use the model output during early learning to detect the examples with clean labels, and either ignore or attempt to correct the false labels, we take a different route and instead capitalize on early learning via regularization. There are two key elements to our approach.
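One way this idea is commonly instantiated is to keep a momentum average of the model's own past predictions as "early-learning targets" and add a term that rewards staying consistent with them. A minimal NumPy sketch — the function name, the exact form of the regularizer, and the hyperparameters `lam` and `beta` are illustrative assumptions, not taken verbatim from the paper:

```python
import numpy as np

def elr_style_loss(probs, labels_onehot, targets, lam=3.0, beta=0.7):
    """Sketch of an early-learning-regularization-style objective.

    probs:         current softmax outputs, shape (n, k)
    labels_onehot: (possibly noisy) one-hot labels, shape (n, k)
    targets:       momentum average of past predictions, shape (n, k)
    lam, beta:     illustrative hyperparameters (assumptions)
    """
    # temporal ensemble: targets track the model's own early predictions
    targets = beta * targets + (1.0 - beta) * probs
    # standard cross-entropy against the given (noisy) labels
    ce = -np.mean(np.sum(labels_onehot * np.log(probs + 1e-12), axis=1))
    # regularizer: log(1 - <t, p>) decreases as predictions agree with the
    # early-learning targets, counteracting the pull toward false labels
    inner = np.sum(targets * probs, axis=1)
    reg = np.mean(np.log(1.0 - inner + 1e-12))
    return ce + lam * reg, targets

# toy usage: two examples, two classes
probs = np.array([[0.9, 0.1], [0.2, 0.8]])
labels = np.array([[1.0, 0.0], [0.0, 1.0]])
targets = np.full((2, 2), 0.5)
loss, targets = elr_style_loss(probs, labels, targets)
```

Because the regularization term is minimized when predictions agree with the accumulated targets, gradient descent on this combined loss keeps pulling the model toward its early-phase predictions rather than toward the mislabeled examples.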