Adversarial Training against Location-Optimized Adversarial Patches

arXiv.org Machine Learning

Deep neural networks have been shown to be susceptible to adversarial examples -- small, imperceptible changes constructed to cause misclassification in otherwise highly accurate image classifiers. As a practical alternative, recent work proposed so-called adversarial patches: clearly visible, but adversarially crafted, rectangular patches in images. These patches can easily be printed and applied in the physical world. While defenses against imperceptible adversarial examples have been studied extensively, robustness against adversarial patches is poorly understood. In this work, we first devise a practical approach to obtaining adversarial patches while actively optimizing their location within the image. Then, we apply adversarial training on these location-optimized adversarial patches and demonstrate significantly improved robustness on CIFAR10 and GTSRB. Additionally, in contrast to adversarial training on imperceptible adversarial examples, our adversarial patch training does not reduce accuracy.
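
To make the pipeline concrete, here is a minimal sketch of adversarial training with location-optimized patches. It assumes a PyTorch image classifier; the function names, the random-search-over-locations strategy, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): patch pixels are
# optimized by signed gradient ascent on the classifier's loss, the patch
# location by random search, and the classifier then trains on patched images.
import torch
import torch.nn.functional as F

def apply_patch(images, patch, y, x):
    """Paste a (C, ph, pw) patch onto a (B, C, H, W) batch at location (y, x)."""
    out = images.clone()
    ph, pw = patch.shape[1:]
    out[:, :, y:y + ph, x:x + pw] = patch
    return out

def location_optimized_patch(model, images, labels, patch, steps=10, lr=0.05, trials=4):
    _, _, H, W = images.shape
    ph, pw = patch.shape[1:]
    # Random search over candidate locations: keep the one with the highest loss.
    best_loss, best_yx = -float("inf"), (0, 0)
    for _ in range(trials):
        y = torch.randint(0, H - ph + 1, (1,)).item()
        x = torch.randint(0, W - pw + 1, (1,)).item()
        with torch.no_grad():
            loss = F.cross_entropy(model(apply_patch(images, patch, y, x)), labels)
        if loss.item() > best_loss:
            best_loss, best_yx = loss.item(), (y, x)
    y, x = best_yx
    # Signed gradient ascent on the patch pixels at the chosen location.
    patch = patch.clone().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(apply_patch(images, patch, y, x)), labels)
        grad, = torch.autograd.grad(loss, patch)
        with torch.no_grad():
            patch += lr * grad.sign()
            patch.clamp_(0, 1)
    return patch.detach(), (y, x)

def adversarial_training_step(model, optimizer, images, labels, patch):
    """One step of adversarial training: attack first, then train on the result."""
    patch, (y, x) = location_optimized_patch(model, images, labels, patch)
    loss = F.cross_entropy(model(apply_patch(images, patch, y, x)), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return patch, loss.item()
```

The signed-gradient update mirrors PGD-style attacks; reusing the returned patch as the starting point for the next batch would amortize the attack cost over the course of training.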


On Physical Adversarial Patches for Object Detection

arXiv.org Machine Learning

In this paper, we demonstrate a physical adversarial patch attack against object detectors, notably the YOLOv3 detector. Unlike previous work on physical object detection attacks, which required the patch to overlap with the objects being misclassified or hidden from detection, we show that a properly designed patch can suppress virtually all the detected objects in the image. That is, we can place the patch anywhere in the image and cause all objects in the image to be missed entirely by the detector, even those far away from the patch itself. This opens up new lines of physical attacks against object detection systems, which require no modification of the objects in a scene. A demo of the system can be found at https://youtu.be/WXnQjbZ1e7Y.
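
A rough sketch of how such a suppression patch could be trained is given below. It assumes a differentiable `detector` callable that returns per-candidate objectness logits of shape (B, num_candidates), a deliberate simplification of YOLOv3's output; all names and hyperparameters are hypothetical.

```python
# Hypothetical sketch (not the paper's code): optimize one universal patch so
# that, wherever it is pasted, every candidate box's objectness drops toward 0.
import torch

def train_suppression_patch(detector, loader, patch_size=64, epochs=5, lr=0.01):
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(epochs):
        for images, _ in loader:  # images assumed to be floats in [0, 1]
            _, _, H, W = images.shape
            # Place the patch at a random location each step so the attack
            # does not depend on where the patch ends up in the image.
            y = torch.randint(0, H - patch_size + 1, (1,)).item()
            x = torch.randint(0, W - patch_size + 1, (1,)).item()
            patched = images.clone()
            patched[:, :, y:y + patch_size, x:x + patch_size] = patch.clamp(0, 1)
            obj_logits = detector(patched)  # assumed shape: (B, num_candidates)
            # Suppress the strongest detection in each image; over training,
            # all candidates sink below the detection threshold.
            loss = torch.sigmoid(obj_logits).max(dim=1).values.mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return patch.detach().clamp(0, 1)
```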


DPAttack: Diffused Patch Attacks against Universal Object Detection

arXiv.org Artificial Intelligence

Recently, deep neural networks (DNNs) have been widely and successfully used in object detection, e.g., Faster RCNN, YOLO, and CenterNet. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. Adversarial attacks against object detection fall into two categories: whole-pixel attacks and patch attacks. Whereas whole-pixel attacks perturb a large number of pixels in the image, we propose a diffused patch attack (DPAttack) that fools object detectors with asteroid-shaped or grid-shaped diffused patches, changing only a small number of pixels. Experiments show that DPAttack successfully fools most object detectors, and it won second place in the Alibaba Tianchi competition: Alibaba-Tsinghua Adversarial Challenge on Object Detection. Our code can be obtained from https://github.com/Wu-Shudeng/DPAttack.
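
The core idea, perturbing only a sparse but structured set of pixels, can be sketched as follows. The grid-shaped mask, the objectness-suppression loss, and the `detector` interface (returning per-candidate objectness logits) are illustrative assumptions, not the DPAttack implementation.

```python
# Illustrative sketch (assumptions, not the DPAttack code): the perturbation
# is confined to thin grid lines by a binary mask, so few pixels change.
import torch

def grid_mask(H, W, stride=16, thickness=1):
    """Binary (1, 1, H, W) mask with thin horizontal and vertical grid lines."""
    m = torch.zeros(1, 1, H, W)
    for t in range(thickness):
        m[:, :, t::stride, :] = 1.0
        m[:, :, :, t::stride] = 1.0
    return m

def diffused_patch_attack(detector, images, steps=50, lr=0.03):
    mask = grid_mask(*images.shape[-2:])
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        adv = (images + mask * delta).clamp(0, 1)
        # Minimize the strongest objectness score so detections disappear.
        loss = torch.sigmoid(detector(adv)).max(dim=1).values.mean()
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= lr * grad.sign()
    return (images + mask * delta).clamp(0, 1).detach()
```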



RPATTACK: Refined Patch Attack on General Object Detectors

arXiv.org Artificial Intelligence

Nowadays, general object detectors like YOLO and Faster R-CNN, as well as their variants, are widely exploited in many applications. Many works have revealed that these detectors are extremely vulnerable to adversarial patch attacks. The perturbed regions generated by previous patch-based attacks on object detectors are much larger than necessary and perceptible to the human eye. To generate far smaller yet more effective perturbations, we propose a novel patch-based method for attacking general object detectors. First, we propose a patch selection and refining scheme that finds the pixels most important for the attack and gradually removes inconsequential perturbations. Then, for a stable ensemble attack, we balance the gradients of the detectors to avoid over-optimizing any one of them during training. Our RPAttack achieves a missed detection rate of 100% for both YOLOv4 and Faster R-CNN while modifying only 0.32% of pixels on the VOC 2007 test set. Our code is available at https://github.com/VDIGPKU/RPAttack.
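
A minimal sketch of the two described ingredients, gradient-based pixel selection and balanced ensemble gradients, might look like this; `loss_a` and `loss_b` stand in for the detection losses of two detectors, and everything here is an illustrative assumption rather than the released RPAttack code.

```python
# Illustrative sketch (assumptions, not the RPAttack code): keep only the
# pixels with the largest gradient magnitude, and normalize each detector's
# gradient so neither dominates the ensemble update.
import torch

def top_k_pixel_mask(images, loss_fn, k=1000):
    """Binary mask keeping, per image, the k pixels with the largest gradient."""
    x = images.clone().requires_grad_(True)
    grad, = torch.autograd.grad(loss_fn(x), x)
    score = grad.abs().sum(dim=1, keepdim=True)   # (B, 1, H, W)
    flat = score.flatten(1)                       # (B, H*W)
    thresh = flat.topk(k, dim=1).values[:, -1:]   # k-th largest per image
    return (flat >= thresh).float().view_as(score)

def balanced_ensemble_step(delta, images, mask, loss_a, loss_b, lr=0.02):
    """One attack step; `delta` is a perturbation created with requires_grad=True."""
    adv = (images + mask * delta).clamp(0, 1)
    la, lb = loss_a(adv), loss_b(adv)
    ga, = torch.autograd.grad(la, delta, retain_graph=True)
    gb, = torch.autograd.grad(lb, delta)
    # Balance the two detectors' gradients by their norms before combining.
    g = ga / (ga.norm() + 1e-8) + gb / (gb.norm() + 1e-8)
    with torch.no_grad():
        delta -= lr * g.sign()
    return delta
```

Shrinking k over the course of the attack would gradually drop inconsequential perturbations, in the spirit of the refining scheme described above.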