Improved Image Wasserstein Attacks and Defenses
Hu, Edward J., Swaminathan, Adith, Salman, Hadi, Yang, Greg
arXiv.org Artificial Intelligence
A recently proposed Wasserstein distance-bounded threat model is a promising alternative that limits perturbations to movements of pixel mass. We point out and rectify flaws in the previous definition of the Wasserstein threat model, and we explore stronger attacks and defenses under our better-defined framework. Lastly, we discuss the inability of current Wasserstein-robust models to defend against perturbations seen in the real world. We will release our code and trained models upon publication.

Deep learning approaches to computer vision tasks, such as image classification, are not robust. For example, a data point that is classified correctly can be modified in a nearly imperceptible way that causes the classifier to misclassify it (Szegedy et al., 2013; Goodfellow et al., 2015).
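As a purely illustrative sketch (not the paper's implementation), the intuition behind "pixel mass movement" is the earth mover's (Wasserstein-1) distance: the minimum total amount of mass times distance that must be moved to turn one intensity distribution into another. For two equal-mass 1-D histograms this has a simple closed form via cumulative sums; the function name `emd_1d` and the toy histograms below are hypothetical.

```python
def emd_1d(p, q):
    """1-D earth mover's distance between two equal-mass histograms.

    For 1-D distributions with equal total mass, the Wasserstein-1
    distance equals the sum of absolute differences of the running
    cumulative sums: each unit of surplus mass must cross each bin
    boundary it is carried over, costing 1 per boundary crossed.
    """
    total, cum = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi      # net mass that must flow past this bin
        total += abs(cum)   # cost of moving that mass one bin over
    return total

# Moving one unit of mass one pixel to the right costs 1.
print(emd_1d([0, 1, 0, 0], [0, 0, 1, 0]))  # -> 1.0
# Moving it two pixels costs 2.
print(emd_1d([1, 0, 0, 0], [0, 0, 1, 0]))  # -> 2.0
```

A threat model bounded by this distance allows an attacker to slide intensity between nearby pixels (small mass movements) while forbidding large-scale redistribution, in contrast to an L-infinity ball, which bounds per-pixel changes independently of where the mass goes.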
May 9, 2023