Detecting Adversarial Examples

Furkan Mumcu, Yasin Yilmaz

arXiv.org Artificial Intelligence 

Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples. While numerous successful adversarial attacks have been proposed, defenses against these attacks remain relatively understudied. Existing defense approaches either focus on negating the effects of perturbations caused by the attacks to restore the DNNs' original predictions or use a secondary model to detect adversarial examples. However, these methods often become ineffective due to the continuous advancements in attack techniques. We propose a novel universal and lightweight method to detect adversarial examples by analyzing the layer outputs of DNNs. Through theoretical justification and extensive experiments, we demonstrate that our detection method is highly effective, compatible with any DNN architecture, and applicable across different domains, such as image, video, and audio.

Goodfellow et al. (2014) demonstrated that deep neural networks (DNNs) are vulnerable to adversarial examples and proposed the Fast Gradient Sign Method (FGSM) to craft these adversarial examples by adding perturbations to the model inputs, leveraging the linear nature of DNNs. After the initial introduction of FGSM, various adversarial attacks were proposed across different domains. However, compared to the vast diversity among attack techniques, existing defense methods are built on a few different strategies.
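To illustrate the attack the paper builds on, here is a minimal sketch of FGSM, which perturbs each input coordinate by epsilon in the direction of the sign of the loss gradient. The paper does not specify a model; this sketch uses a toy logistic-regression classifier (standing in for a DNN) so the gradient can be written analytically, and all names (`fgsm`, `loss_grad_wrt_x`, the weights) are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": y = sigmoid(w @ x + b), with weights assumed already trained.
w = rng.normal(size=8)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_x(x, y_true):
    """Gradient of the binary cross-entropy loss w.r.t. the input x.
    For logistic regression this is (p - y) * w, where p = sigmoid(w @ x + b)."""
    p = sigmoid(w @ x + b)
    return (p - y_true) * w

def fgsm(x, y_true, eps=0.1):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(grad_x loss)."""
    return x + eps * np.sign(loss_grad_wrt_x(x, y_true))

x = rng.normal(size=8)
y = 1.0
x_adv = fgsm(x, y, eps=0.25)

# Each coordinate is shifted by exactly +/- eps, so the L-infinity
# perturbation is eps, while the loss on the true label increases.
print(np.max(np.abs(x_adv - x)))
```

Because the perturbation follows the sign of the gradient, the model's confidence in the true class drops even though the per-coordinate change is bounded by eps, which is what makes such examples hard to spot in the input space and motivates detection from internal layer outputs instead.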
