Adversarial Training and Robustness for Multiple Perturbations

Neural Information Processing Systems 

Defenses against adversarial examples, such as adversarial training, are typically tailored to a single perturbation type (e.g., small $\ell_\infty$-noise). For other perturbations, these defenses offer no guarantees and, at times, even increase the model's vulnerability. Our aim is to understand the reasons underlying this robustness trade-off, and to train models that are simultaneously robust to multiple perturbation types. We prove that a trade-off in robustness to different types of $\ell_p$-bounded and spatial perturbations must exist in a natural and simple statistical setting. We corroborate our formal analysis by demonstrating similar robustness trade-offs on MNIST and CIFAR10.
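To make the setting concrete, below is a minimal, hypothetical PyTorch sketch of adversarial training against multiple perturbation types at once: each batch is attacked with a PGD attack per perturbation type ($\ell_\infty$ and $\ell_2$ here), and the model is trained on an aggregate of the resulting losses. The attack parameters and the max/average aggregation are illustrative assumptions for exposition, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """PGD attack under an l_inf-norm constraint (image batches in [0, 1])."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()          # ascent step on the sign of the gradient
            delta.clamp_(-eps, eps)                     # project onto the l_inf ball
            delta.copy_((x + delta).clamp(0, 1) - x)    # keep perturbed inputs in the valid range
        delta.grad.zero_()
    return (x + delta).detach()

def pgd_l2(model, x, y, eps=2.0, alpha=0.1, steps=40):
    """PGD attack under an l_2-norm constraint (assumes 4D image batches)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            grad = delta.grad
            gnorm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta += alpha * grad / gnorm               # normalized gradient step
            dnorm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta *= (eps / dnorm).clamp(max=1.0)       # project onto the l_2 ball
            delta.copy_((x + delta).clamp(0, 1) - x)
        delta.grad.zero_()
    return (x + delta).detach()

def multi_perturbation_training_step(model, optimizer, x, y,
                                      attacks=(pgd_linf, pgd_l2)):
    """One training step on the worst-case loss over several perturbation types."""
    losses = []
    for attack in attacks:
        x_adv = attack(model, x, y)
        losses.append(F.cross_entropy(model(x_adv), y))
    loss = torch.stack(losses).max()   # worst-case aggregation; .mean() gives an average variant
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on only one of these attacks recovers standard single-perturbation adversarial training; the trade-off discussed above concerns how robustness to the attack types not trained against can degrade, and how the aggregate objective splits robustness across them.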