Thwarting Adversarial Examples: An $L_0$-Robust Sparse Fourier Transform
Mitali Bafna, Jack Murtagh, Nikhil Vyas
–Neural Information Processing Systems
We give a new algorithm for approximating the Discrete Fourier transform of an approximately sparse signal that is robust to worst-case $L_0$ corruptions, namely that some coordinates of the signal can be corrupted arbitrarily. Our techniques generalize to a wide range of linear transformations used in data analysis, such as the Discrete Cosine and Sine transforms, the Hadamard transform, and their high-dimensional analogs. We use our algorithm to successfully defend against worst-case $L_0$ adversaries in the setting of image classification. We give experimental results on the Jacobian-based Saliency Map Attack (JSMA) and the Carlini-Wagner (CW) $L_0$ attack on the MNIST and Fashion-MNIST datasets, as well as the Adversarial Patch attack on the ImageNet dataset.
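To make the setting concrete, here is a minimal, illustrative sketch of recovering a signal that is sparse in an orthonormal transform basis (the Hadamard transform, one of the transforms the abstract mentions) when an $L_0$ adversary has corrupted a few coordinates arbitrarily. This is an iterative-hard-thresholding-style loop of my own construction for illustration, not the paper's algorithm; the function names `fwht` and `robust_sparse_recover` are hypothetical.

```python
import numpy as np

def fwht(a):
    """Orthonormal fast Walsh-Hadamard transform (self-inverse).
    Length of `a` must be a power of two."""
    a = np.asarray(a, dtype=float).copy()
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a / np.sqrt(n)

def robust_sparse_recover(y, k, t, iters=20):
    """Illustrative L0-robust sparse recovery (not the paper's method):
    y is k-sparse in the Hadamard basis with up to t coordinates
    corrupted arbitrarily. Alternate between (1) hard-thresholding to
    the top-k Hadamard coefficients and (2) re-guessing the corrupted
    coordinates as the t coordinates with the largest residuals."""
    x_hat = y.copy()
    for _ in range(iters):
        c = fwht(x_hat)
        c_sparse = np.zeros_like(c)
        keep = np.argsort(np.abs(c))[-k:]
        c_sparse[keep] = c[keep]                 # keep top-k coefficients
        x_proj = fwht(c_sparse)                  # fwht is its own inverse
        bad = np.argsort(np.abs(y - x_proj))[-t:]
        x_hat = y.copy()
        x_hat[bad] = x_proj[bad]                 # trust projection where y looks corrupted
    return x_hat
```

In the defense setting described above, such a recovery step would be run on an input image before classification, so that an adversary who can change only a few pixels cannot control what the classifier sees.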
Dec-31-2018
- Country:
- Asia > Japan
- Honshū > Kansai > Kyoto Prefecture > Kyoto (0.04)
- Europe
- Denmark > Capital Region
- Copenhagen (0.04)
- Spain > Catalonia
- Barcelona Province > Barcelona (0.04)
- North America
- Canada > Quebec
- Montreal (0.04)
- United States
- California > Santa Clara County
- San Jose (0.04)
- Massachusetts > Middlesex County
- Cambridge (0.14)
- New York > New York County
- New York City (0.04)
- Oregon > Multnomah County
- Portland (0.04)
- Genre:
- Research Report (0.68)
- Industry:
- Information Technology > Security & Privacy (0.93)
- Technology: