Appendices

Neural Information Processing Systems

The supplementary material is organized as follows. We first discuss additional related work and provide experiment details in Section 2 and Appendix B, respectively. We then introduce a noisy version of the 5-slab block, which we later use in Appendix D.

Adversarial Defenses: Neural networks trained using standard procedures such as SGD are extremely vulnerable [23] to norm-bounded adversarial attacks such as FGSM [23], PGD [42], CW [11], and Momentum [17]; unrestricted attacks [7, 19] can significantly degrade model performance as well. Defense strategies based on heuristics such as feature squeezing [82], denoising [80], encoding [10], specialized nonlinearities [83], and distillation [56] have had limited success against stronger attacks [2].
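To make the attacks discussed above concrete, the following is a minimal sketch of FGSM, the simplest of the gradient-based attacks listed. It is an illustration only, not the setup used in this paper: we assume a toy logistic-regression model (the weight vector `w`, inputs, and labels below are hypothetical), for which the loss gradient with respect to the input can be written in closed form, and take a single signed-gradient step bounded in the l-infinity norm by `eps`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # logistic loss for a single example with label y in {-1, +1}
    return np.log1p(np.exp(-y * np.dot(w, x)))

def fgsm(w, x, y, eps):
    # gradient of the logistic loss with respect to the input x
    grad_x = -y * w * sigmoid(-y * np.dot(w, x))
    # FGSM: one signed-gradient step, so ||x_adv - x||_inf <= eps
    return x + eps * np.sign(grad_x)

# hypothetical model and example, for illustration only
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.2])
y = 1.0
x_adv = fgsm(w, x, y, eps=0.1)
# the perturbed input incurs a strictly higher loss than the clean one
```

PGD, mentioned alongside FGSM above, iterates this signed-gradient step several times with projection back onto the eps-ball, which is why it is typically the stronger attack.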