Extending Adversarial Attacks and Defenses to Deep 3D Point Cloud Classifiers

arXiv.org Machine Learning

3D object classification and segmentation using deep neural networks has been extremely successful. Because identifying 3D objects has many safety-critical applications, these networks must be robust against adversarial changes to the input data. There is a growing body of research on generating human-imperceptible adversarial attacks, and defenses against them, in the 2D image classification domain. However, 3D objects differ from 2D images in many ways, and this specific domain has not been rigorously studied so far. We present a preliminary evaluation of adversarial attacks on deep 3D point cloud classifiers, namely PointNet and PointNet++, by evaluating both white-box and black-box adversarial attacks that were proposed for 2D images and extending those attacks to reduce the perceptibility of the perturbations in 3D space. We also show the high effectiveness of simple defenses against those attacks by proposing new defenses that exploit the unique structure of 3D point clouds. Finally, we attempt to explain the effectiveness of the defenses through the intrinsic structures of both the point clouds and the neural network architectures. Overall, we find that networks that process 3D point cloud data are vulnerable to adversarial attacks but are also easier to defend than 2D image classifiers. Our investigation provides the groundwork for future studies on improving the robustness of deep neural networks that handle 3D data.
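
As a concrete illustration of how a 2D white-box attack carries over to point clouds, the sketch below applies an FGSM-style perturbation directly to point coordinates. It is a minimal sketch, not the paper's exact procedure: `model` stands in for a PointNet-like classifier assumed to map a batch of (N, 3) point clouds to class logits, and the step size `epsilon` is an illustrative choice.

    import torch
    import torch.nn.functional as F

    def fgsm_point_cloud(model, points, label, epsilon=0.01):
        """Shift every point by epsilon along the sign of the loss gradient."""
        points = points.clone().detach().requires_grad_(True)   # (N, 3) coordinates
        logits = model(points.unsqueeze(0))                      # assumed (1, N, 3) -> (1, C) interface
        loss = F.cross_entropy(logits, label.unsqueeze(0))       # label: scalar class-index tensor
        loss.backward()
        # One signed gradient step per coordinate, as in 2D FGSM; a real attack
        # would also clip the perturbation to keep it imperceptible in 3D space.
        return (points + epsilon * points.grad.sign()).detach()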


Learning Saliency Maps for Adversarial Point-Cloud Generation

arXiv.org Artificial Intelligence

3D point-cloud recognition with deep neural networks (DNNs) has made remarkable progress, achieving both high recognition accuracy and robustness to random point missing (or dropping). However, the robustness of DNNs to maliciously manipulated point missing is still unclear. In this paper, we show that point missing can be a critical security concern by proposing a malicious point-dropping method that generates adversarial point clouds to fool DNNs. Our method is based on learning a saliency map for a whole point cloud, which assigns each point a score reflecting its contribution to the model-recognition loss, i.e., the difference between the losses with and without that point. The saliency map is learned by approximating the non-differentiable point-dropping process with a differentiable procedure of shifting points towards the cloud center. In this way, the loss difference, i.e., the saliency score of each point in the map, can be measured by the corresponding gradient of the loss w.r.t. the point under spherical coordinates. Based on the learned saliency map, a malicious point-dropping attack can be mounted by dropping the points with the highest scores, leading to a significant increase in model loss and thus inferior classification performance. Extensive evaluations on several state-of-the-art point-cloud recognition models, including PointNet, PointNet++ and DGCNN, demonstrate the efficacy and generality of our proposed saliency-map-based point-dropping scheme. Code for the experiments is released at https://github.com/tianzheng4/Learning-PointCloud-Saliency-Maps.
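
The sketch below illustrates the gist of such a saliency-guided point drop: score each point by the radial component of the loss gradient (roughly how much the loss would grow if the point were shifted to the cloud center) and drop the top-scoring points. It is a simplified reading of the abstract, and the exact scoring in the released code may differ; `model` is again a placeholder PointNet-like classifier.

    import torch
    import torch.nn.functional as F

    def drop_salient_points(model, points, label, n_drop=50):
        points = points.clone().detach().requires_grad_(True)     # (N, 3)
        loss = F.cross_entropy(model(points.unsqueeze(0)), label.unsqueeze(0))
        loss.backward()
        center = points.detach().median(dim=0).values              # cloud center (spherical origin)
        radial = points.detach() - center
        r = radial.norm(dim=1, keepdim=True).clamp_min(1e-8)
        # Shifting a point to the center changes the loss by roughly -r * dL/dr,
        # where dL/dr is the inner product of the gradient with the radial direction.
        scores = -(points.grad * radial / r).sum(dim=1) * r.squeeze(1)
        keep = scores.argsort()[:-n_drop]                           # discard the n_drop highest scores
        return points.detach()[keep]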


On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks

arXiv.org Machine Learning

While deep learning in the 3D domain has achieved revolutionary performance in many tasks, the robustness of these models has not been sufficiently studied or explored. Regarding 3D adversarial samples, most existing works focus on manipulating local points, which may fail to invoke global geometric properties such as robustness under linear transformations that preserve Euclidean distance, i.e., isometries. In this work, we show that existing state-of-the-art deep 3D models are extremely vulnerable to isometry transformations. Armed with Thompson Sampling, we develop a black-box attack with a success rate of over 95% on the ModelNet40 dataset. Incorporating the Restricted Isometry Property, we propose a novel framework for white-box attacks on top of spectral-norm-based perturbation. In contrast to previous works, our adversarial samples are experimentally shown to be strongly transferable. Evaluated on a sequence of prevailing 3D models, our white-box attack achieves success rates from 98.88% to 100%. It maintains a success rate of over 95% even within an imperceptible rotation range of [±2.81°].
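
Stripped of the Thompson Sampling machinery, the black-box side of such an isometry attack reduces to searching for a small rotation that flips the prediction. The sketch below does this by plain random search over rotations about the z-axis; the angle budget, the single rotation axis, and the `predict` interface (an (N, 3) array in, a class id out) are illustrative assumptions rather than the paper's setup.

    import numpy as np

    def rotation_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def random_rotation_attack(predict, points, true_label, max_deg=2.81, trials=100):
        """Search for a small, distance-preserving rotation that changes the predicted label."""
        for _ in range(trials):
            theta = np.deg2rad(np.random.uniform(-max_deg, max_deg))
            rotated = points @ rotation_z(theta).T        # an isometry: all pairwise distances preserved
            if predict(rotated) != true_label:
                return rotated, np.rad2deg(theta)          # adversarial rotation found
        return None, None                                   # no success within the budget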


Adversarial Attack and Defense on Point Sets

arXiv.org Artificial Intelligence

The emergence of 3D point cloud data in critical vision tasks (e.g., ADAS) urges researchers to pay more attention to the robustness of 3D representations and deep networks. To this end, we develop an attack and defense scheme dedicated to 3D point cloud data, for preventing 3D point clouds from being manipulated as well as for pursuing noise-tolerant 3D representations. A set of novel 3D point cloud attack operations is proposed via pointwise gradient perturbation and adversarial point attachment/detachment. We then develop a flexible perturbation-measurement scheme for 3D point cloud data to detect potential attack data or noisy sensing data. Extensive experimental results on common point cloud benchmarks demonstrate the validity of the proposed 3D attack and defense framework.
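
A very simple instance of the perturbation-measurement idea is to flag points whose nearest-neighbour distances look statistically unusual, which catches many attached or heavily perturbed points. The sketch below is one such outlier filter under assumed settings (the neighbourhood size k and the standard-deviation ratio are illustrative), not the paper's actual scheme.

    import numpy as np

    def statistical_outlier_removal(points, k=10, std_ratio=2.0):
        """Keep points whose mean distance to their k nearest neighbours is not anomalously large."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)   # (N, N) pairwise distances
        knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)                  # skip the zero self-distance
        threshold = knn_mean.mean() + std_ratio * knn_mean.std()
        return points[knn_mean <= threshold]                                    # drop suspected attack/noise points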


Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

arXiv.org Machine Learning

Deep neural networks (DNNs) are found to be vulnerable to adversarial examples, which are carefully crafted inputs with a small magnitude of perturbation aimed at inducing arbitrarily incorrect predictions. Recent studies show that adversarial examples can pose a threat to real-world security-critical applications: a "physically adversarial Stop Sign" can be synthesized such that autonomous driving cars will misrecognize it as another sign (e.g., a speed limit sign). However, these image-based adversarial examples cannot easily alter 3D scans from sensors such as the LiDAR or radar widely equipped on autonomous vehicles. In this paper, we reveal the potential vulnerabilities of LiDAR-based autonomous driving detection systems by proposing an optimization-based approach, LiDAR-Adv, to generate real-world adversarial objects that can evade LiDAR-based detection systems under various conditions. We first explore the vulnerabilities of LiDAR using an evolution-based black-box attack algorithm, and then propose a strong attack strategy using our gradient-based approach, LiDAR-Adv. We test the generated adversarial objects on the Baidu Apollo autonomous driving platform and show that such physical systems are indeed vulnerable to the proposed attacks. We 3D-print our adversarial objects and perform physical experiments with LiDAR-equipped cars to illustrate the effectiveness of LiDAR-Adv. Please find more visualizations and physical experimental results on this website: https://sites.google.com/view/lidar-adv.
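
At its core, the evolution-based black-box stage described above is a derivative-free search over object geometry. The sketch below shows that idea in its simplest possible form: mutate candidate mesh vertices and keep whichever mutant most lowers a black-box detection score. Everything here is an assumption for illustration; in particular, `detection_score` stands in for the full LiDAR rendering and Apollo detection pipeline, which this sketch does not model.

    import numpy as np

    def evolve_adversarial_object(vertices, detection_score, iters=200, sigma=0.005, pop=16):
        """Simple (1+lambda) evolution: keep the vertex perturbation with the lowest detection score."""
        best, best_score = vertices.copy(), detection_score(vertices)
        for _ in range(iters):
            candidates = [best + sigma * np.random.randn(*best.shape) for _ in range(pop)]
            scores = [detection_score(c) for c in candidates]
            i = int(np.argmin(scores))
            if scores[i] < best_score:               # lower score = less likely to be detected
                best, best_score = candidates[i], scores[i]
        return best, best_score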