Detecting Adversarial Examples through Nonlinear Dimensionality Reduction
Francesco Crecchi, Davide Bacciu, Battista Biggio
Deep neural networks are vulnerable to adversarial examples, i.e., carefully perturbed inputs crafted to mislead classification. This work proposes a detection method that combines non-linear dimensionality reduction with density estimation. Our empirical findings show that the proposed approach effectively detects adversarial examples crafted by non-adaptive attackers, i.e., attacks not specifically tuned to bypass the detector. Given these promising results, we plan to extend our analysis to adaptive attackers in future work.
May 1, 2019
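The sketch below is a rough illustration of the detection idea summarized in the abstract, not the authors' exact pipeline. It assumes feature vectors (e.g., hidden-layer activations) are already extracted and uses synthetic data in their place; scikit-learn's KernelPCA stands in for the non-linear dimensionality reduction step because, unlike t-SNE, it can embed previously unseen samples, and KernelDensity provides the density model. The 5th-percentile threshold is an illustrative choice, not a value from the paper.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Placeholder "benign" feature vectors standing in for real hidden activations.
X_benign = rng.normal(loc=0.0, scale=1.0, size=(500, 64))

# 1) Non-linear dimensionality reduction fitted on benign data only.
embedder = KernelPCA(n_components=2, kernel="rbf", gamma=0.05)
Z_benign = embedder.fit_transform(X_benign)

# 2) Density estimation in the low-dimensional space.
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(Z_benign)

# 3) Detection threshold: here, the 5th percentile of benign log-densities.
benign_scores = kde.score_samples(Z_benign)
threshold = np.percentile(benign_scores, 5)

def is_adversarial(x):
    """Flag a sample whose embedded density falls below the benign threshold."""
    z = embedder.transform(x.reshape(1, -1))
    return kde.score_samples(z)[0] < threshold

# Example: a strongly perturbed input should land in a low-density region.
x_suspicious = X_benign[0] + rng.normal(scale=5.0, size=64)
print("benign sample flagged:", is_adversarial(X_benign[0]))
print("perturbed sample flagged:", is_adversarial(x_suspicious))
```

In this scheme the embedding and density model are fit only on benign data, so any input mapped to a low-density region of the reduced space is treated as suspicious; a non-adaptive attacker optimizing only against the classifier is unlikely to account for this extra check.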