Model Mimic Attack: Knowledge Distillation for Provably Transferable Adversarial Examples

Kirill Lukyanov, Andrew Perminov, Denis Turdakov, Mikhail Pautov

arXiv.org Artificial Intelligence 

The vulnerability of artificial neural networks to adversarial perturbations in the black-box setting is widely studied in the literature. The majority of attack methods for constructing these perturbations suffer from an impractically large number of queries required to find an adversarial example. In this work, we focus on knowledge distillation as an approach to conduct transfer-based black-box adversarial attacks and propose an iterative training of the surrogate model on an expanding dataset. This work is the first, to our knowledge, to provide provable guarantees on the success of a knowledge distillation-based attack on classification neural networks: we prove that if the student model has sufficient learning capacity, an attack on the teacher model is guaranteed to be found within a finite number of distillation iterations.

The robustness of deep neural networks to input perturbations is a crucial property for integrating them into safety-demanding areas of machine learning, such as self-driving cars, medical diagnostics, and finance. Although neural networks are expected to produce similar outputs for similar inputs, they have long been known to be vulnerable to adversarial perturbations [Szegedy et al. (2014)] - small, carefully crafted input transformations that do not change the semantics of the input object but force a model to produce a predefined decision.
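To make the iterative distillation scheme described above concrete, the following is a minimal sketch of one possible realization, assuming PyTorch. The names (`teacher`, `student`, `craft_fgsm`, `distillation_attack`), the FGSM attack on the surrogate, and all hyperparameters are illustrative assumptions, not the authors' implementation: the surrogate is repeatedly fit to teacher labels on a growing dataset, attacked in the white-box setting, and the candidate adversarial point is added to the dataset whenever the attack fails to transfer.

```python
# A minimal sketch, assuming PyTorch; names and hyperparameters are hypothetical.
import torch
import torch.nn.functional as F

def craft_fgsm(model, x, y, eps):
    """White-box FGSM on the surrogate: one signed-gradient step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def distillation_attack(teacher, student, x0, dataset, eps=8 / 255,
                        max_iters=20, epochs=5, lr=1e-3):
    """Iteratively distill the black-box teacher into the student surrogate
    on an expanding dataset until an adversarial example for x0 transfers."""
    y0 = teacher(x0).argmax(dim=1)               # one query for the clean label
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(max_iters):
        # 1) Fit the student to the teacher's labels on the current dataset.
        for _ in range(epochs):
            for x, y in dataset:
                opt.zero_grad()
                F.cross_entropy(student(x), y).backward()
                opt.step()
        # 2) Attack the student in the white-box setting.
        x_adv = craft_fgsm(student, x0, y0, eps)
        # 3) Check transfer to the teacher with a single query.
        y_adv = teacher(x_adv).argmax(dim=1)
        if y_adv != y0:
            return x_adv                          # attack transferred
        # 4) Otherwise, expand the dataset with the teacher-labeled point.
        dataset.append((x_adv, y_adv))
    return None                                   # query budget exhausted
```

In this sketch, `dataset` is a list of `(input, teacher_label)` pairs, so each failed transfer attempt adds exactly one teacher query to the training set; the paper's guarantee concerns the existence of a finite number of such iterations when the student has sufficient capacity.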