Intriguing Frequency Interpretation of Adversarial Robustness for CNNs and ViTs

Lu Chen, Han Yang, Hu Wang, Yuxin Cao, Shaofeng Li, Yuan Luo

arXiv.org Artificial Intelligence 

Adversarial examples have attracted significant attention over the years, yet our understanding of their frequency-domain characteristics remains limited. In this paper, we investigate the intriguing properties of adversarial examples in the frequency domain for the image classification task and report several key findings. These results suggest that different network architectures have different frequency preferences, and that differences in frequency components between adversarial and natural examples may directly influence model robustness. Based on our findings, we conclude with three practical proposals that serve as a valuable reference for the AI model security community. Despite the remarkable performance of deep neural networks (DNNs) in many fields [1]-[3], their counterintuitive vulnerability has attracted increasing attention, both because of safety-critical applications [4], [5] and because of the black-box nature of DNNs [6], [7].
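To make the frequency-domain perspective concrete, the sketch below decomposes an image into low- and high-frequency components with a circular mask in the 2-D DFT domain. This is a generic illustration of the kind of analysis described, not the paper's exact procedure; the cutoff fraction `radius_frac` and the hard circular mask are illustrative assumptions.

```python
import numpy as np

def frequency_split(img, radius_frac=0.25):
    """Split a grayscale image into low- and high-frequency parts
    using a circular low-pass mask in the shifted 2-D DFT domain.
    (radius_frac and the hard mask are illustrative choices.)"""
    f = np.fft.fftshift(np.fft.fft2(img))          # center DC component
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)           # distance from center
    mask = r <= radius_frac * min(h, w) / 2        # low-frequency disk
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~mask)).real
    return low, high

# Example: the two components sum back to the original image,
# so perturbation energy can be attributed to each frequency band.
img = np.random.rand(64, 64)
low, high = frequency_split(img)
assert np.allclose(low + high, img)
```

Comparing the high-frequency components of natural and adversarial examples in this way is one common route to studying architecture-specific frequency preferences.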
