Universal, transferable and targeted adversarial attacks
Deep neural networks have recently been found to be vulnerable: carefully crafted inputs, called adversarial examples, can lead a network to make incorrect predictions. The difficulty of generating an attack depends on the scenario, the attacker's goal, and the attacker's capabilities. For example, a targeted attack is harder to generate than a non-targeted one, a universal attack harder than a non-universal one, and a transferable attack harder than a non-transferable one. The question is: does there exist an attack that can survive the harshest adversity and meet all of these requirements at once? Although many cheap and effective attacks have been proposed, this question remains unsolved for large models and large-scale datasets. In this paper, we learn a universal mapping from source inputs to adversarial examples. These examples fool classification networks into classifying all of them as a single targeted class. Moreover, they are transferable across different models.
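To make the notion of a universal targeted attack concrete, here is a minimal sketch in NumPy. It is an assumption-laden toy, not the paper's method: a fixed random linear classifier stands in for a deep network, and instead of learning a full source-to-adversarial mapping, it learns the simplest universal object, a single perturbation `delta` added to every input, optimized so the classifier assigns all perturbed inputs to one chosen target class. The dimensions, step size, and L2 budget are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a deep network (assumption:
# the real setting uses large image models and datasets).
D, C = 20, 5                       # input dimension, number of classes
W = rng.normal(size=(D, C))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predict(x):
    """Class probabilities for a batch of inputs."""
    return softmax(x @ W)

# A batch of "source" inputs.
X = rng.normal(size=(100, D))
target = 3                          # the single targeted class

# Learn one universal perturbation delta by gradient descent on the
# mean cross-entropy toward the target class, with an L2 budget.
delta = np.zeros(D)
lr, eps = 0.5, 2.0                  # step size and budget (assumed values)
for _ in range(200):
    p = predict(X + delta)
    # d(cross-entropy)/d(logits) = p - onehot(target)
    grad_logits = p.copy()
    grad_logits[:, target] -= 1.0
    g = (grad_logits @ W.T).mean(axis=0)   # chain rule through logits = x @ W
    delta -= lr * g
    # Project delta back onto the L2 ball of radius eps.
    n = np.linalg.norm(delta)
    if n > eps:
        delta *= eps / n

success = (predict(X + delta).argmax(axis=1) == target).mean()
print(f"targeted success rate with one universal perturbation: {success:.2f}")
```

The same `delta` is applied to every input, which is what makes the attack universal; the paper goes further by learning a mapping (so the perturbation can depend on the input) and by requiring transfer across model architectures, which this sketch does not capture.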
Sep-13-2019