Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting
Neural Information Processing Systems
We show that label noise exists in adversarial training. This label noise arises from a mismatch between the true label distribution of adversarial examples and the labels they inherit from clean examples: the adversarial perturbation distorts the true label distribution, yet the common practice of inheriting labels from clean examples ignores this shift. Recognizing this label noise sheds light on the prevalence of robust overfitting in adversarial training and explains its intriguing dependence on perturbation radius and data quality.
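The label-inheritance practice the abstract refers to can be illustrated with a minimal sketch. The snippet below is hypothetical and not from the paper: it uses a toy linear classifier and a one-step FGSM-style perturbation to show that standard adversarial training pairs the perturbed input with the *clean* example's label, even though the perturbation pushes the input toward the other class and may shift its true label distribution.

```python
import numpy as np

# Toy linear classifier p(y=1|x) = sigmoid(w . x); all names are illustrative.
w = np.array([1.0, -2.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, eps):
    # Gradient of the logistic loss w.r.t. x is (p - y) * w; step in its sign.
    p = sigmoid(w @ x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x, y = np.array([0.3, 0.1]), 1    # clean example with its clean label
x_adv = fgsm(x, y, eps=0.5)       # adversarial example...
# ...yet adversarial training still trains on the pair (x_adv, y): the label
# is inherited from the clean example, ignoring that the perturbation moved
# x_adv toward the other class.
p_clean, p_adv = sigmoid(w @ x), sigmoid(w @ x_adv)
print(p_clean, p_adv)  # the class-1 probability drops under the perturbation
```

Under the paper's view, this gap between the inherited label `y` and the (shifted) true label distribution of `x_adv` is exactly the label noise that drives robust overfitting.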