Adversarial Robustness via Adversarial Label-Smoothing

Morgane Goibert, Elvis Dohmatob

arXiv.org Artificial Intelligence 

We study Label-Smoothing as a means of improving the adversarial robustness of supervised deep-learning models. After establishing a thorough and unified framework, we propose several novel Label-Smoothing methods: adversarial, Boltzmann and second-best Label-Smoothing. On various datasets (MNIST, CIFAR10, SVHN) and models (linear models, MLPs, LeNet, ResNet), we show that these methods improve adversarial robustness against a variety of attacks (FGSM, BIM, DeepFool, Carlini-Wagner) by better taking the dataset geometry into account. The proposed Label-Smoothing methods have two main advantages: they can be implemented as a modified cross-entropy loss, so they require no changes to the network architecture and do not increase training time, and they improve both standard and adversarial accuracy.
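The abstract notes that Label-Smoothing can be implemented as a modified cross-entropy loss. As a minimal sketch of the general idea, the snippet below implements standard *uniform* label smoothing in NumPy: the one-hot target is mixed with a smoothing distribution before computing cross-entropy. The paper's adversarial, Boltzmann and second-best variants differ in how the smoothing mass is redistributed across classes; that redistribution is not reproduced here, and the function name and `alpha` parameter are illustrative choices, not the authors' API.

```python
import numpy as np

def smoothed_cross_entropy(logits, true_label, alpha=0.1):
    """Cross-entropy against a label-smoothed target distribution.

    Uniform baseline only: the one-hot target becomes
    (1 - alpha) * one_hot + alpha * uniform.  The paper's variants
    (adversarial, Boltzmann, second-best) change the smoothing
    distribution, not this overall loss structure.
    """
    k = logits.shape[-1]
    # Smoothed target: keep (1 - alpha) extra mass on the true class,
    # spread alpha uniformly over all k classes.
    target = np.full(k, alpha / k)
    target[true_label] += 1.0 - alpha
    # Numerically stable log-softmax.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -(target * log_probs).sum()
```

With `alpha = 0` this reduces to the ordinary cross-entropy loss, which is why it drops into existing training code without architectural changes.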
