Learning Black-Box Attackers with Transferable Priors and Query Feedback
Jiancheng Yang, Yangzhou Jiang, Xiaoyang Huang, Bingbing Ni, Chenglong Zhao
arXiv.org Artificial Intelligence
This paper addresses the challenging black-box adversarial attack problem, where only the classification confidence of a victim model is available. Motivated by the consistency of visual saliency across different vision models, a surrogate model is expected to improve attack performance via transferability. By combining transferability-based and query-based black-box attacks, we propose a surprisingly simple baseline approach (named SimBA++) using the surrogate model, which significantly outperforms several state-of-the-art methods. Moreover, to efficiently utilize the query feedback, we update the surrogate model with a novel learning scheme, named High-Order Gradient Approximation (HOGA). By constructing a high-order gradient computation graph, we update the surrogate model to approximate the victim model in both the forward and backward passes. SimBA++ and HOGA together form the Learnable Black-Box Attack (LeBA), which surpasses the previous state of the art by considerable margins: LeBA significantly reduces queries while keeping attack success rates close to 100% in extensive ImageNet experiments, including attacks on vision benchmarks and defensive models. Code is open source at https://github.com/TrustworthyDL/LeBA.
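The combination described in the abstract (a transfer prior from a surrogate model plus query feedback in the style of SimBA) can be sketched as follows. This is a hypothetical simplification, not the authors' implementation: `victim_probs` and `surrogate_grad` are assumed callables standing in for the victim model's confidence output and the surrogate's input gradient, and the toy linear victim in the usage example is purely illustrative.

```python
import numpy as np

def simba_plus_plus(x, true_label, victim_probs, surrogate_grad,
                    eps=0.2, max_queries=100, rng=None):
    """Sketch of a SimBA++-style attack loop (simplified, hypothetical):
    alternate steps along the surrogate's gradient sign (transfer prior)
    with random single-coordinate SimBA steps, accepting a perturbation
    only when the victim's true-class confidence drops (query feedback)."""
    rng = rng or np.random.default_rng(0)
    x_adv = x.copy()
    p_best = victim_probs(x_adv)[true_label]
    for q in range(max_queries):
        if q % 2 == 0:
            # Transfer prior: move along the surrogate gradient sign.
            direction = np.sign(surrogate_grad(x_adv, true_label))
        else:
            # SimBA-style query step: perturb one random coordinate.
            direction = np.zeros_like(x_adv)
            direction[rng.integers(x_adv.size)] = 1.0
        for sign in (+1.0, -1.0):
            cand = np.clip(x_adv + sign * eps * direction, 0.0, 1.0)
            p = victim_probs(cand)[true_label]
            if p < p_best:  # keep the step only if confidence dropped
                x_adv, p_best = cand, p
                break
    return x_adv, p_best
```

In the full method the surrogate is additionally updated from the query feedback (HOGA); the sketch above only covers the fixed-surrogate SimBA++ baseline.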
Oct-21-2020