
Lei Ma

Neural Information Processing Systems

The state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples with additive, random-noise-like perturbations. While such examples are hardly found in the physical world, the image blurring effect caused by object motion commonly occurs in practice, making its study especially important for widely adopted real-time image processing tasks (e.g., object detection and tracking). In this paper, we take the first step toward comprehensively investigating the potential hazards that motion-induced blur poses to DNNs. We propose a novel adversarial attack method that generates visually natural motion-blurred adversarial examples, named the motion-based adversarial blur attack (ABBA).
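As an illustrative sketch only (not the ABBA attack itself, whose blur is optimized adversarially), a simple linear motion blur can be modeled as averaging shifted copies of the image along the motion direction; the `length` parameter below is an assumed blur extent in pixels:

```python
import numpy as np

def apply_motion_blur(image: np.ndarray, length: int = 9) -> np.ndarray:
    """Apply a horizontal linear motion blur to a grayscale image (H, W).

    Equivalent to convolving with a 1-D box kernel of `length` taps,
    which approximates the smear produced by uniform horizontal motion.
    `length` is assumed odd so the blur stays centered.
    """
    pad = length // 2
    # Replicate edge pixels so the output keeps the input's shape.
    padded = np.pad(image, ((0, 0), (pad, pad)), mode="edge")
    # Average `length` horizontally shifted copies of the image.
    shifted = [padded[:, i:i + image.shape[1]] for i in range(length)]
    return np.mean(shifted, axis=0)

# A single bright pixel smears into a horizontal streak of equal weights.
img = np.zeros((5, 11))
img[2, 5] = 1.0
blurred = apply_motion_blur(img, length=5)
```

A real motion-blur model would rotate the kernel to the motion direction and handle color channels; this sketch only shows why motion blur is a structured, visually natural perturbation rather than additive random noise.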