Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, David Wagner
– arXiv.org Artificial Intelligence
We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat iterative optimization-based attacks, we find defenses relying on this effect can be circumvented. For each of the three types of obfuscated gradients we discover, we describe characteristic behaviors of defenses exhibiting this effect and develop attack techniques to overcome it. In a case study, examining non-certified white-box-secure defenses at ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 8 defenses relying on obfuscated gradients. Our new attacks successfully circumvent 6 completely and 1 partially.
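The abstract's claim that iterative optimization-based attacks can be adapted to defeat obfuscated gradients is illustrated by one of the techniques the paper develops, Backward Pass Differentiable Approximation (BPDA): apply the non-differentiable defense on the forward pass, but substitute the identity for its derivative on the backward pass. The sketch below is a minimal illustration, not the authors' reference implementation; the names `model` and `defense_fn` and the L-infinity attack parameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


class BPDAIdentity(torch.autograd.Function):
    """Forward: apply the (possibly non-differentiable) defense.
    Backward: treat the defense as the identity, so gradients pass through."""

    @staticmethod
    def forward(ctx, x, defense_fn):
        # defense_fn is the input-transformation defense, e.g. quantization.
        return defense_fn(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Approximate d(defense)/dx by the identity.
        return grad_output, None


def pgd_with_bpda(model, defense_fn, x, y, eps=8 / 255, alpha=2 / 255, steps=40):
    """Iterative L-infinity attack that steps through the defense via BPDA."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(BPDAIdentity.apply(x_adv, defense_fn))
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)              # keep a valid image
    return x_adv.detach()
```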
Feb-15-2018
- Country:
  - North America > United States
    - California (0.14)
    - Massachusetts (0.14)
- Industry:
  - Government (0.68)
  - Information Technology > Security & Privacy (0.46)