DARD: Dice Adversarial Robustness Distillation against Adversarial Attacks
Zou, Jing, Zhang, Shungeng, Qiu, Meikang, Li, Chong
arXiv.org Artificial Intelligence
Deep learning models are vulnerable to adversarial examples, posing critical security challenges in real-world applications. While Adversarial Training (AT) is a widely adopted defense mechanism for enhancing robustness, it often incurs a trade-off by degrading performance on unperturbed, natural data. Recent efforts have highlighted that larger models exhibit greater robustness than their smaller counterparts. In this paper, we empirically demonstrate that such robustness can be systematically distilled from large teacher models into compact student models. To achieve better performance, we introduce Dice Adversarial Robustness Distillation (DARD), a novel method designed to transfer robustness through a tailored knowledge distillation paradigm. Additionally, we propose Dice Projected Gradient Descent (DPGD), an adversarial example generation method optimized for effective attacks. Our extensive experiments demonstrate that the DARD approach consistently outperforms adversarially trained networks with the same architecture, achieving superior robustness and standard accuracy.
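The abstract does not specify how DPGD differs from standard Projected Gradient Descent, but the baseline it builds on can be sketched. Below is a minimal, hedged illustration of an untargeted L-infinity PGD attack on a toy logistic-regression model with an analytic gradient; the model, parameter names, and default step sizes are illustrative assumptions, not the paper's method.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Standard L-inf PGD (untargeted) on a toy logistic-regression
    model. This is the classic baseline that DPGD presumably extends;
    the paper's actual DPGD procedure is not described in the abstract.

    x : input vector, y : label in {-1, +1}, (w, b) : model parameters.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # Logistic loss L = log(1 + exp(-y * (w.x + b)));
        # its input gradient is dL/dx = -y * sigmoid(-y * z) * w.
        z = x_adv @ w + b
        grad = -y * (1.0 / (1.0 + np.exp(y * z))) * w
        x_adv = x_adv + alpha * np.sign(grad)      # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project onto eps-ball
    return x_adv
```

A robustness-distillation setup such as DARD would then train the student on examples like these, matching the teacher's outputs on both clean and perturbed inputs; the exact loss weighting is a design choice of the paper.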
Sep-16-2025