Exploring Gradient-Guided Masked Language Model to Detect Textual Adversarial Attacks

Xiaomei Zhang, Zhaoxi Zhang, Yanjun Zhang, Xufei Zheng, Leo Yu Zhang, Shengshan Hu, Shirui Pan

arXiv.org Artificial Intelligence 

Abstract—Textual adversarial examples pose serious threats to the reliability of natural language processing systems. Recent studies suggest that adversarial examples tend to deviate from the underlying manifold of normal texts, whereas pre-trained masked language models can approximate the manifold of normal data. These findings inspire the exploration of masked language models for detecting textual adversarial attacks. We first introduce Masked Language Model-based Detection (MLMD), which leverages the mask and unmask operations of the masked language modeling (MLM) objective to induce the difference in manifold changes between normal and adversarial texts. Although MLMD achieves competitive detection performance, its exhaustive one-by-one masking strategy introduces significant computational overhead. Our posterior analysis reveals that a significant number of non-keywords in the input are unimportant for detection yet still consume resources. Building on this insight, we introduce Gradient-guided MLMD (GradMLMD), which leverages gradient information to identify and skip non-keywords during detection, significantly reducing resource consumption without compromising detection performance. Extensive experiments show that GradMLMD matches or surpasses MLMD and outperforms existing detectors. Among defenses based on the off-manifold conjecture, GradMLMD presents a novel method for capturing manifold changes and provides a practical solution for real-world application challenges.

Index Terms—NLP, adversarial attack, adversarial defense, masked language model.

Although advanced deep neural networks have the potential to revolutionize the performance of myriad natural language processing (NLP) tasks [1-3], they are highly vulnerable to adversarial attacks [4-7]. Through carefully manipulated inputs, attackers can drive models to produce erroneous outputs to their advantage.
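To make the mask-and-unmask idea concrete, the sketch below implements the core detection loop in miniature: each word is masked, a masked language model proposes a replacement, and the number of resulting label flips serves as the detection score; passing a reduced index set mimics GradMLMD's gradient-guided skipping of non-keywords. This is a toy illustration, not the authors' implementation — the `classify` and `fill_mask` stand-ins are hypothetical placeholders for a real victim classifier and a pre-trained masked language model (e.g., a BERT-style model), and the flip-count score is a simplification of the paper's detection statistic.

```python
from typing import Callable, Iterable, List, Optional

def mlmd_score(words: List[str],
               classify: Callable[[str], int],
               fill_mask: Callable[[List[str], int], str],
               keyword_indices: Optional[Iterable[int]] = None) -> int:
    """Mask each selected word, refill it with a masked LM, count label flips.

    Under the off-manifold conjecture, adversarial texts flip the prediction
    far more often than normal texts, so a high flip count signals an attack.
    Restricting `keyword_indices` to gradient-selected keywords sketches the
    GradMLMD speedup: fewer mask/unmask rounds, same detection signal.
    """
    original_label = classify(" ".join(words))
    indices = keyword_indices if keyword_indices is not None else range(len(words))
    flips = 0
    for i in indices:
        candidate = list(words)
        candidate[i] = "[MASK]"                 # mask operation
        candidate[i] = fill_mask(candidate, i)  # unmask via the masked LM
        if classify(" ".join(candidate)) != original_label:
            flips += 1
    return flips

# Toy stand-ins (purely illustrative): a keyword-triggered "classifier" and a
# masked LM that always predicts the on-manifold word "good".
classify = lambda text: 1 if "bad" in text.split() else 0
fill_mask = lambda toks, i: "good"

text = ["this", "movie", "is", "bad"]
full_score = mlmd_score(text, classify, fill_mask)                         # masks all 4 words
guided_score = mlmd_score(text, classify, fill_mask, keyword_indices=[3])  # masks only the keyword
```

With the toy stand-ins, both calls yield the same flip count, but the guided call issues one mask/unmask round instead of four — the resource saving GradMLMD targets.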
Some researchers have focused on introducing adversarial perturbations by altering entire sentences; however, predominant efforts have been devoted to developing attacks at the word level and character level [8-14].

Correspondence to Dr. L. Zhang and Prof. X. Zheng. Xiaomei Zhang, Leo Yu Zhang, and Shirui Pan are with the School of Information and Communication Technology, Griffith University, Queensland, Australia (e-mail: xiaomei.zhang@griffithuni.edu.au). Zhaoxi Zhang and Yanjun Zhang are with the School of Computer Science, University of Technology Sydney, Sydney, New South Wales, Australia (e-mail: Zhaoxi.Zhang-1@student.uts.edu.au). Xufei Zheng is with the College of Computer and Information Science, Southwest University, Chongqing, China (e-mail: zxufei@swu.edu.cn).