On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses

Anish Athalye, Nicholas Carlini

arXiv.org Machine Learning 

Neural networks are known to be vulnerable to adversarial examples. In this note, we evaluate the two white-box defenses that appeared at CVPR 2018 and find they are ineffective: when applying existing techniques, we can reduce the accuracy of the defended models to 0%.
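The "existing techniques" referenced here are standard white-box gradient attacks. As a minimal illustration of the idea (not the paper's actual evaluation), the sketch below runs an l-infinity projected gradient descent (PGD) attack against a toy linear classifier using only NumPy; the model, hyperparameters, and data are all invented for the example.

```python
import numpy as np

# Toy linear classifier: logits = x @ W. A white-box attacker knows W.
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 2))

def predict(x):
    return np.argmax(x @ W, axis=1)

def pgd_linf(x, y, eps=0.3, alpha=0.05, steps=40):
    """Untargeted PGD: ascend the cross-entropy loss of the true label y,
    projecting back into the l_inf ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        logits = x_adv @ W
        # Softmax probabilities, then dLoss/dlogits = p - onehot(y).
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(y)), y] -= 1.0
        grad = p @ W.T                              # chain rule back to input
        x_adv = x_adv + alpha * np.sign(grad)       # signed gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project into the eps-ball
    return x_adv

# Label clean points with the model itself, so clean accuracy is 100%.
x = rng.standard_normal((100, 5))
y = predict(x)
acc_clean = (predict(x) == y).mean()
acc_adv = (predict(pgd_linf(x, y)) == y).mean()
```

In this toy setting the attack flips every point whose decision margin is smaller than the reachable logit change; on a defended model, the paper's point is that once gradients are computed correctly, the same kind of optimization drives accuracy to 0%.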
