A Random Ensemble of Encrypted Vision Transformers for Adversarially Robust Defense
Ryota Iijima, Sayaka Shiota, Hitoshi Kiya
–arXiv.org Artificial Intelligence
Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In previous studies, models encrypted with a secret key were demonstrated to be robust against white-box attacks, but not against black-box ones. In this paper, we propose a novel method using vision transformers (ViTs): a random ensemble of encrypted models that enhances robustness against both white-box and black-box attacks. In addition, a benchmark attack method, AutoAttack, is applied to the models to test adversarial robustness objectively. In experiments, the method was demonstrated to be robust against not only white-box attacks but also black-box ones in image classification tasks on the CIFAR-10 and ImageNet datasets. The method was also compared with the state of the art on a standardized benchmark for adversarial robustness, RobustBench, and it was verified to outperform conventional defenses in terms of both clean accuracy and robust accuracy.
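The core idea in the abstract, picking one model at random from an ensemble of key-encrypted models at each inference, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the key-seeded patch shuffling and the placeholder models are assumptions introduced here for clarity.

```python
import random

def encrypt_patches(patches, key):
    """Shuffle image patches with a permutation seeded by a secret key
    (a simplified stand-in for the paper's block-wise encryption)."""
    rng = random.Random(key)  # deterministic: same key -> same permutation
    order = list(range(len(patches)))
    rng.shuffle(order)
    return [patches[i] for i in order]

class RandomEncryptedEnsemble:
    """Hypothetical sketch: each model is paired with its own secret key;
    at inference time one key/model pair is chosen at random, so an
    attacker cannot know which encrypted model processed the input."""
    def __init__(self, models_by_key):
        self.models_by_key = models_by_key  # {secret_key: model_fn}

    def predict(self, patches):
        key = random.choice(list(self.models_by_key))
        encrypted = encrypt_patches(patches, key)
        return self.models_by_key[key](encrypted)

# Toy usage with placeholder "models" (here they just count patches):
ensemble = RandomEncryptedEnsemble({101: len, 202: len})
result = ensemble.predict(["p0", "p1", "p2", "p3"])
```

The randomness of the model choice is what targets black-box transfer attacks, while the secret-key encryption targets white-box ones.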
Feb-11-2024