T-BFA: Targeted Bit-Flip Adversarial Weight Attack
Rakin, Adnan Siraj; He, Zhezhi; Li, Jingtao; Yao, Fan; Chakrabarti, Chaitali; Fan, Deliang
Traditional Deep Neural Network (DNN) security is mostly related to the well-known adversarial input example attack. Recently, another dimension of adversarial attack, namely attack on DNN weight parameters, has been shown to be very powerful. As a representative one, the Bit-Flip based adversarial weight Attack (BFA) injects an extremely small amount of faults into weight parameters to hijack the DNN function. Prior works on BFA focus on un-targeted attacks that misclassify all inputs into a random output class by flipping a very small number of weight bits stored in computer memory. This paper proposes the first targeted BFA-based (T-BFA) adversarial weight attack on DNN models, which can intentionally mislead selected inputs to a target output class. The objective is achieved by identifying the weight bits that are highly associated with the classification of a targeted output through a novel class-dependent weight bit ranking algorithm. T-BFA performance has been successfully demonstrated on multiple network architectures for the image classification task. For example, by merely flipping 27 out of 88 million weight bits, T-BFA can misclassify all the images from the 'Ibex' class into the 'Proboscis Monkey' class (i.e., a 100% attack success rate) on the ImageNet dataset, while maintaining 59.35% validation accuracy on ResNet-18. Moreover, we successfully demonstrate our T-BFA attack on a real computer prototype system running DNN computation.
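The abstract describes the class-dependent weight bit ranking only at a high level. As one way to picture it, the sketch below (not the authors' exact T-BFA procedure) ranks the bits of an 8-bit-quantized weight layer by a first-order estimate of how much flipping each bit would reduce a targeted cross-entropy loss computed on source-class inputs. The helper name `rank_bits_for_target`, the single-layer scope, and the quantization scheme are assumptions made for illustration.

```python
# Illustrative sketch of a gradient-based, class-dependent bit ranking
# (assumed approach, not the paper's exact algorithm).
import torch
import torch.nn.functional as F

def rank_bits_for_target(model, layer, source_loader, target_class, top_k=10):
    """Return top-k (flat_weight_index, bit_position) pairs whose flip is
    expected to push source-class inputs toward `target_class`."""
    model.eval()
    layer.weight.requires_grad_(True)
    if layer.weight.grad is not None:
        layer.weight.grad = None

    # Accumulate the gradient of the *targeted* loss over source-class inputs.
    for x, _ in source_loader:
        logits = model(x)
        target = torch.full((x.size(0),), target_class, dtype=torch.long)
        F.cross_entropy(logits, target).backward()

    grad = layer.weight.grad.detach().flatten()

    # Assumed 8-bit two's-complement quantization of the layer weights.
    scale = layer.weight.detach().abs().max() / 127.0
    q_w = torch.clamp(torch.round(layer.weight.detach() / scale), -128, 127)
    q_w = q_w.flatten().to(torch.int64)

    scores = []
    for bit in range(8):
        bit_val = 1 << bit
        current = (q_w >> bit) & 1
        # Change in the quantized value if this bit is flipped
        # (sign bit flips carry the opposite sign in two's complement).
        sign = -1.0 if bit == 7 else 1.0
        delta_q = torch.where(current == 1, -bit_val, bit_val).float() * sign
        # First-order estimate of the decrease in the targeted loss.
        scores.append(-grad * delta_q * scale)

    scores = torch.stack(scores, dim=1)  # shape: [num_weights, 8]
    top = torch.topk(scores.flatten(), top_k).indices
    return [(int(i) // 8, int(i) % 8) for i in top]
```

Under this kind of scheme, the attacker would flip the highest-ranked bit, re-evaluate, and repeat until the selected inputs are misclassified into the target class, which is consistent with the very small bit-flip budgets reported above.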
Sep-10-2020