Fooling Adversarial Training with Inducing Noise

Zhirui Wang, Yifei Wang, Yisen Wang

arXiv.org Machine Learning 

Adversarial training is widely believed to be a reliable approach to improve model robustness against adversarial attacks. However, in this paper, we show that when trained on a certain type of poisoned data, adversarial training can also be fooled into catastrophic behavior, e.g., 1% robust test accuracy with 90% robust training accuracy on the CIFAR-10 dataset. Previously, other types of noise injected into the training data have successfully fooled standard training (15.8% standard test accuracy with 99.9% standard training accuracy on CIFAR-10), but their poisoning effects can be easily removed by adopting adversarial training. Therefore, we aim to design a new type of inducing noise, named ADVIN, which is an irremovable poisoning of the training data. ADVIN can not only degrade the robustness of adversarial training by a large margin, for example, from 51.7% to 0.57% on CIFAR-10, but is also effective for fooling standard training (13.1% standard test accuracy with 100% standard training accuracy). Additionally, ADVIN can be applied to prevent personal data (such as selfies) from being exploited without authorization under either standard or adversarial training.

In recent years, deep learning has achieved great success, while the existence of adversarial examples (Szegedy et al., 2014) alerts us that existing deep neural networks are very vulnerable to adversarial attacks. Adversarial Training (AT) is currently the most effective approach against adversarial examples (Madry et al., 2017; Athalye et al., 2018). In practice, adversarially trained models have shown good robustness under various attacks, and the recent state-of-the-art defense algorithms (Zhang et al., 2019; Wang et al., 2020) are all variants of adversarial training. Therefore, it is widely believed that we have already found the cure for adversarial attacks, i.e., adversarial training, based on which we can build trustworthy models to a certain degree.
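For context, adversarial training in the sense of Madry et al. (2017) solves a min-max problem over the training distribution; the equation below states this standard objective as a sketch (the ℓ∞ ball of radius ε around each input is the common convention, not necessarily this paper's exact threat model):

\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \max_{\|\delta\|_{\infty} \le \epsilon} \mathcal{L}\big(f_{\theta}(x + \delta), y\big) \right]

Here f_θ is the model, L is the training loss, and the inner maximization is typically approximated with projected gradient descent (PGD); the poisoning discussed in this paper targets models trained under exactly this kind of objective.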