Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness

Fanny Yang, Zuowen Wang, Christina Heinze-Deml

Neural Information Processing Systems 

This work provides theoretical and empirical evidence that invariance-inducing regularizers can increase predictive accuracy for worst-case spatial transformations (spatial robustness). Evaluated on these adversarially transformed examples, we demonstrate that adding regularization on top of standard augmented or adversarial training reduces the relative robust error on CIFAR-10 by 20% with minimal computational overhead. Similar relative gains hold for SVHN and CIFAR-100. Regularized augmentation-based methods in fact even outperform handcrafted networks that were explicitly designed to be spatially equivariant. Furthermore, on SVHN, a dataset known to have inherent variance in orientation, we observe that robust training also improves standard accuracy on the test set. We prove that this no-trade-off phenomenon holds for adversarial examples from transformation groups in the infinite-data limit.
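The abstract describes combining worst-case spatial transformations with an invariance-inducing regularizer. The sketch below illustrates the general recipe only; it is not the paper's implementation. It uses cyclic shifts of a 1-D signal as a hypothetical stand-in for the rotations/translations studied in the paper, a grid search for the worst-case transformation, and a squared-distance penalty between clean and transformed logits as an assumed form of the regularizer (the paper's exact regularizer and architecture may differ).

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits, y):
    return -np.log(softmax(logits)[y] + 1e-12)

def predict(W, x):
    # Toy linear classifier standing in for a neural network.
    return W @ x

def worst_case_shift(W, x, y, shifts):
    # Grid search over candidate transformations (cyclic shifts as a
    # stand-in for spatial rotations/translations); return the shift
    # that maximizes the classification loss.
    losses = [cross_entropy(predict(W, np.roll(x, s)), y) for s in shifts]
    return shifts[int(np.argmax(losses))]

def regularized_robust_loss(W, x, y, shifts, lam=1.0):
    # Robust training objective: loss on the worst-case-transformed
    # input, plus an invariance-inducing regularizer (assumed here to
    # be the squared distance between clean and transformed logits).
    s_star = worst_case_shift(W, x, y, shifts)
    x_adv = np.roll(x, s_star)
    ce = cross_entropy(predict(W, x_adv), y)
    reg = float(np.sum((predict(W, x) - predict(W, x_adv)) ** 2))
    return ce + lam * reg, ce, reg
```

Minimizing this objective over `W` pushes the classifier to both fit the worst-case-transformed examples and produce near-identical outputs on clean and transformed inputs, which is the sense in which the regularizer induces invariance.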
