Fooling deep neural networks for object detection with adversarial 3-D logos


Figure: Examples of the researchers' 3D adversarial logo attack applied to different 3D object meshes, with the aim of fooling a YOLOv2 detector.

Over the past decade, researchers have developed a growing number of deep neural networks that can be trained to complete a variety of tasks, including recognizing people or objects in images. While many of these computational techniques have achieved remarkable results, they can sometimes be fooled into misclassifying data. An adversarial attack is a type of cyberattack that specifically targets deep neural networks in this way: it crafts inputs that closely resemble the data the network was trained to analyze but differ from it in subtle, deliberately chosen ways. Because the network fails to recognize these slight differences, it is prompted into making incorrect predictions.
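The researchers' 3D logo attack is more elaborate, but the basic principle of adversarial perturbation can be illustrated with a classic one-step technique, the fast gradient sign method (FGSM). The sketch below is a minimal PyTorch example, not the researchers' method; `model`, `image`, `label`, and `epsilon` are hypothetical placeholders for a differentiable classifier, a normalized input tensor, its true class index, and the perturbation budget.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """One-step FGSM perturbation (illustrative sketch).

    Nudges each pixel in the direction that increases the classifier's
    loss, bounded per pixel by `epsilon` so the adversarial image stays
    visually close to the original.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step along the sign of the gradient, then clamp to the valid
    # pixel range so the result remains a plausible image.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

With a budget of only a few percent of the pixel range, such a perturbation is typically imperceptible to humans yet often enough to flip the prediction of an undefended network, which is exactly the mismatch between real and adversarial data described above.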