On Physical Adversarial Patches for Object Detection

arXiv.org Machine Learning

In this paper, we demonstrate a physical adversarial patch attack against object detectors, notably the YOLOv3 detector. Unlike previous work on physical object detection attacks, which required the patch to overlap with the objects being misclassified or evading detection, we show that a properly designed patch can suppress virtually all the detected objects in the image. That is, we can place the patch anywhere in the image, causing all existing objects to be missed entirely by the detector, even those far away from the patch itself. This in turn opens up new lines of physical attacks against object detection systems, which require no modification of the objects in a scene. A demo of the system can be found at https://youtu.be/WXnQjbZ1e7Y.
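
To make the described attack concrete, here is a minimal PyTorch-style sketch of how a detection-suppressing patch could be optimized. It assumes a hypothetical `objectness_scores(images)` helper that runs a frozen YOLOv3 detector and returns per-candidate objectness logits; the paper's actual loss, patch placement, and training details may differ.

```python
# Sketch only: `objectness_scores` is a hypothetical detector wrapper,
# and the hyperparameters are illustrative, not the paper's values.
import torch

def apply_patch(images, patch, x0, y0):
    """Paste a (3, ph, pw) patch into every image at a fixed location."""
    patched = images.clone()
    ph, pw = patch.shape[-2:]
    patched[:, :, y0:y0 + ph, x0:x0 + pw] = patch
    return patched

def train_suppression_patch(loader, objectness_scores, patch_size=100,
                            steps=1000, lr=0.03, x0=10, y0=10, device="cuda"):
    # The patch is the only trainable tensor; the detector stays frozen.
    patch = torch.rand(3, patch_size, patch_size, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _, (images, _) in zip(range(steps), loader):
        images = images.to(device)
        patched = apply_patch(images, patch.clamp(0, 1), x0, y0)
        scores = objectness_scores(patched)           # (batch, num_candidates)
        # Push down the strongest candidates so nothing survives thresholding/NMS.
        loss = scores.sigmoid().max(dim=1).values.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)                       # keep the patch printable
    return patch.detach()
```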


Adversarial Patch

@machinelearnbot

A real-world attack on VGG16, using a physical patch generated by the white-box ensemble method described in the Adversarial Patch paper. Our adversarial patch is more effective than a picture of a real toaster, even when the patch is significantly smaller than the toaster.
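
At its core, the patch above is a targeted optimization against a frozen classifier under random transformations. Below is a minimal sketch against torchvision's pretrained VGG16, with random placement standing in for the paper's full transformation distribution (translation, rotation, scale) and ensemble of models; the target index (859, "toaster" in ImageNet-1k), the hyperparameters, and the data loading are illustrative assumptions, and the `weights` argument may need adjusting for your torchvision version.

```python
# Sketch only: assumes `loader` yields image batches in [0, 1] pixel range.
import torch
import torchvision

def train_classification_patch(loader, target_class=859, patch_size=64,
                               steps=500, lr=0.05, device="cuda"):
    model = torchvision.models.vgg16(weights="IMAGENET1K_V1").to(device).eval()
    for p in model.parameters():
        p.requires_grad_(False)
    normalize = torchvision.transforms.Normalize(
        mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    patch = torch.rand(3, patch_size, patch_size, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _, (images, _) in zip(range(steps), loader):
        images = images.to(device)
        _, _, h, w = images.shape
        # Random placement per batch: a crude stand-in for the paper's
        # expectation over translations, rotations and scales.
        y0 = torch.randint(0, h - patch_size, (1,)).item()
        x0 = torch.randint(0, w - patch_size, (1,)).item()
        patched = images.clone()
        patched[:, :, y0:y0 + patch_size, x0:x0 + patch_size] = patch.clamp(0, 1)
        logits = model(normalize(patched))
        # Maximize the target-class probability for every patched image.
        loss = -torch.log_softmax(logits, dim=1)[:, target_class].mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)
    return patch.detach()
```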


This colorful printed patch makes you pretty much invisible to AI

#artificialintelligence

The rise of AI-powered surveillance is extremely worrying. The ability of governments to track and identify citizens en masse could spell an end to public anonymity. But as researchers have shown time and time again, there are ways to trick such systems. The latest example comes from a group of engineers at KU Leuven, a university in Belgium. In a paper shared last week on the preprint server arXiv, the researchers show how simple printed patterns can fool an AI system that's designed to recognize people in images.


Adversarial Patch Camouflage against Aerial Detection

arXiv.org Artificial Intelligence

Detection of military assets on the ground can be performed by applying deep learning-based object detectors on drone surveillance footage. The traditional way of hiding military assets from sight is camouflage, for example by using camouflage nets. However, large assets like planes or vessels are difficult to conceal by means of traditional camouflage nets. An alternative type of camouflage is the direct misleading of automatic object detectors. Recently, it has been observed that small adversarial changes applied to images of an object can cause deep learning-based detectors to produce erroneous output. In particular, adversarial attacks have been successfully demonstrated to suppress person detections in images, requiring only that a patch with a specific pattern be held up in front of the person, thereby essentially camouflaging the person for the detector. Research into this type of patch attack is still limited and several questions related to the optimal patch configuration remain open. This work makes two contributions. First, we apply patch-based adversarial attacks for the use case of unmanned aerial surveillance, where the patch is laid on top of large military assets, camouflaging them from automatic detectors running over the imagery. The patch can prevent automatic detection of the whole object while only covering a small part of it. Second, we perform several experiments with different patch configurations, varying their size, position, number and saliency. Our results show that adversarial patch attacks form a realistic alternative to traditional camouflage activities, and should therefore be considered in the automated analysis of aerial surveillance imagery.
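
The configuration experiments described above amount to sweeping patch size and position and measuring how often the detector still finds the asset. A minimal sketch of such a sweep is shown below; `run_detector` is a hypothetical wrapper returning the number of detections of the asset class in a single image, and the sizes and offsets are illustrative placeholders rather than the paper's settings.

```python
# Sketch only: `run_detector` is a hypothetical detection wrapper.
import itertools
import torch
import torch.nn.functional as F

def sweep_patch_configs(images, patch, run_detector,
                        sizes=(50, 100, 150), offsets=(0, 25, 50)):
    """Return, per (size, dx, dy) configuration, the fraction of images
    in which the asset is still detected after pasting the patch."""
    results = {}
    with torch.no_grad():
        for size, dx, dy in itertools.product(sizes, offsets, offsets):
            # Rescale the trained patch to the requested size.
            scaled = F.interpolate(patch.unsqueeze(0), size=(size, size),
                                   mode="bilinear", align_corners=False)[0]
            patched = images.clone()
            patched[:, :, dy:dy + size, dx:dx + size] = scaled.clamp(0, 1)
            detected = sum(run_detector(img.unsqueeze(0)) > 0 for img in patched)
            results[(size, dx, dy)] = detected / len(patched)
    return results
```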