We thank all reviewers for their positive reception of our paper and for their constructive feedback.

On dual norms and prior work: Thank you for pointing us to the relevant prior work of Demontis et al. and Xu et al., which we had missed. We will discuss the connections between our work and theirs in the revision. Nevertheless, as MNIST is the only vision dataset for which we have been able to train models to high levels of robustness, MNIST is clearly not solved from an adversarial robustness perspective. We think this is an interesting open problem for the community to consider.
Adversarial Training and Robustness for Multiple Perturbations
Defenses against adversarial examples, such as adversarial training, are typically tailored to a single perturbation type (e.g., small $\ell_\infty$-noise). For other perturbations, these defenses offer no guarantees and, at times, even increase the model's vulnerability. Our aim is to understand the reasons underlying this robustness trade-off, and to train models that are simultaneously robust to multiple perturbation types. We prove that a trade-off in robustness to different types of $\ell_p$-bounded and spatial perturbations must exist in a natural and simple statistical setting. We corroborate our formal analysis by demonstrating similar robustness trade-offs on MNIST and CIFAR10. Building upon new multi-perturbation adversarial training schemes, and a novel efficient attack for finding $\ell_1$-bounded adversarial examples, we show that no model trained against multiple attacks achieves robustness competitive with that of models trained on each attack individually. In particular, we uncover a pernicious gradient-masking phenomenon on MNIST, which causes adversarial training with first-order $\ell_\infty, \ell_1$ and $\ell_2$ adversaries to achieve merely $50\%$ accuracy. Our results question the viability and computational scalability of extending adversarial robustness, and adversarial training, to multiple perturbation types.
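To make the multi-perturbation adversarial training schemes concrete, below is a minimal PyTorch sketch of one natural variant: for each training example, craft an adversarial example per perturbation type and take a gradient step on the strongest one (the "max" strategy from the paper). This is not the authors' released implementation; the PGD attackers, step sizes, and perturbation budgets are illustrative assumptions, and the paper's novel efficient $\ell_1$ attack is omitted.

```python
# Minimal sketch of multi-perturbation adversarial training ("max" strategy).
# NOT the authors' released code: the PGD attacks, step sizes, and epsilon
# budgets below are illustrative assumptions, and the paper's dedicated L1
# attack is omitted. Inputs are assumed to be image batches of shape
# (B, C, H, W) with pixel values in [0, 1].
import torch
import torch.nn.functional as F


def pgd_linf(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Standard L-infinity PGD: signed gradient steps, clipped to the eps-ball."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y).backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_(torch.clamp(x + delta, 0, 1) - x)  # keep pixels valid
        delta.grad.zero_()
    return (x + delta).detach()


def pgd_l2(model, x, y, eps=2.0, alpha=0.1, steps=40):
    """L2 PGD: normalized gradient steps, projected onto the eps L2-ball."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y).backward()
        with torch.no_grad():
            g = delta.grad
            g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta += alpha * g / g_norm
            d_norm = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
            delta *= (eps / d_norm).clamp(max=1.0)  # project onto the L2 ball
            delta.copy_(torch.clamp(x + delta, 0, 1) - x)
        delta.grad.zero_()
    return (x + delta).detach()


def max_strategy_step(model, optimizer, x, y, attacks):
    """One training step on the strongest perturbation type per example."""
    model.eval()
    adv = [attack(model, x, y) for attack in attacks]
    with torch.no_grad():
        # Per-example loss under each attack: shape (n_attacks, batch).
        losses = torch.stack(
            [F.cross_entropy(model(a), y, reduction="none") for a in adv]
        )
    worst = losses.argmax(dim=0)  # index of the strongest attack per example
    stacked = torch.stack(adv)    # (n_attacks, B, C, H, W)
    idx = worst.view(1, -1, 1, 1, 1).expand(1, *x.shape)
    x_worst = stacked.gather(0, idx).squeeze(0)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_worst), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The paper's alternative "avg" scheme, which trains on the average loss over all perturbation types, can be sketched from the same pieces by replacing the per-example argmax with a mean of the stacked losses.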