Single Word Change is All You Need: Designing Attacks and Defenses for Text Classifiers
Lei Xu, Sarah Alnegheimish, Laure Berti-Equille, Alfredo Cuesta-Infante, Kalyan Veeramachaneni
In text classification, creating an adversarial example means subtly perturbing a few words in a sentence without changing its meaning, causing a classifier to misclassify it. A concerning observation is that a significant portion of the adversarial examples generated by existing methods change only one word. This single-word perturbation vulnerability is a significant weakness in classifiers, which malicious users can exploit to efficiently create a multitude of adversarial examples. This paper studies this problem and makes the following key contributions: (1) We introduce a novel metric ρ to quantitatively assess a classifier's robustness against single-word perturbation. (2) We present SP-Attack, designed to exploit the single-word perturbation vulnerability; it achieves a higher attack success rate, better preserves sentence meaning, and reduces computation costs compared to state-of-the-art adversarial methods. (3) We propose SP-Defense, which aims to improve ρ by applying data augmentation during learning. Experimental results on four datasets and two classifiers (BERT and distilBERT) show that SP-Defense improves ρ by 14.6% and 13.9% and decreases the attack success rate of SP-Attack by 30.4% and 21.2% on the two classifiers respectively, and it also lowers the success rate of existing attack methods that perturb multiple words.
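The abstract does not spell out how ρ is computed; as a rough illustration only, the sketch below estimates a single-word-perturbation robustness score as the fraction of sentences whose predicted label survives every single-word substitution from a candidate vocabulary. The `classify` function and `candidate_words` vocabulary are hypothetical stand-ins, and the paper's actual definition of ρ may differ (e.g., by constraining substitutes to meaning-preserving synonyms).

```python
# Minimal sketch (not the authors' implementation) of a
# single-word-perturbation robustness score for a text classifier.
from typing import Callable, List, Sequence


def is_swp_robust(sentence: List[str],
                  classify: Callable[[List[str]], int],
                  candidate_words: Sequence[str]) -> bool:
    """True if no single-word substitution changes the predicted label."""
    original_label = classify(sentence)
    for i in range(len(sentence)):
        for w in candidate_words:
            if w == sentence[i]:
                continue
            perturbed = sentence[:i] + [w] + sentence[i + 1:]
            if classify(perturbed) != original_label:
                return False  # found a single-word adversarial example
    return True


def estimate_rho(dataset: List[List[str]],
                 classify: Callable[[List[str]], int],
                 candidate_words: Sequence[str]) -> float:
    """Fraction of sentences robust to every single-word perturbation."""
    robust = sum(is_swp_robust(s, classify, candidate_words)
                 for s in dataset)
    return robust / len(dataset)
```

The same inner loop doubles as a brute-force single-word attack: the first perturbation that flips the label is an adversarial example, which is presumably the vulnerability SP-Attack exploits more efficiently.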
arXiv.org Artificial Intelligence
Jan-30-2024