Adversarial point perturbations on 3D objects

arXiv.org Machine Learning

The importance of training robust neural networks grows as 3D data is increasingly used in deep learning for vision tasks such as autonomous driving. We examine this problem from the attacker's perspective, which is necessary for understanding how neural networks can be exploited, and thus defended. More specifically, we propose adversarial attacks based on solving different optimization problems, such as minimizing the perceptibility of our generated adversarial examples or maintaining a uniform density distribution of points across the adversarial object surfaces. Our four proposed algorithms for attacking 3D point cloud classification are all highly successful against existing neural networks, and we find that some of them are even effective against previously proposed point-removal defenses.
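
As a concrete illustration of the optimization view described above, here is a minimal sketch of a perturbation attack with a perceptibility penalty. It assumes a hypothetical PyTorch classifier `model` mapping a (1, N, 3) point cloud to class logits; the Adam optimizer, step count, and L2-norm penalty weight `c` are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def perturbation_attack(model, points, label, steps=200, lr=0.01, c=1.0):
    """Optimize a pointwise perturbation that fools `model` while staying small.

    points: (1, N, 3) tensor; label: (1,) tensor holding the true class.
    """
    delta = torch.zeros_like(points, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(points + delta)
        # Trade off attack strength (negative cross-entropy) against a
        # simple perceptibility proxy: the L2 norm of the perturbation.
        loss = -F.cross_entropy(logits, label) + c * delta.norm(p=2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (points + delta).detach()
```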


Learning Saliency Maps for Adversarial Point-Cloud Generation

arXiv.org Artificial Intelligence

3D point-cloud recognition with deep neural networks (DNNs) has made remarkable progress in achieving both high recognition accuracy and robustness to random point missing (or dropping). However, the robustness of DNNs to maliciously manipulated point missing is still unclear. In this paper, we show that point missing can be a critical security concern by proposing a malicious point-dropping method that generates adversarial point clouds to fool DNNs. Our method is based on learning a saliency map for a whole point cloud, which assigns each point a score reflecting its contribution to the model-recognition loss, i.e., the difference between the losses with and without that point. The saliency map is learned by approximating the nondifferentiable point-dropping process with a differentiable procedure that shifts points toward the cloud center. In this way, the loss difference, i.e., the saliency score of each point in the map, can be measured by the corresponding gradient of the loss w.r.t. the point under spherical coordinates. Based on the learned saliency map, a malicious point-dropping attack can be carried out by dropping the points with the highest scores, leading to a significant increase in model loss and thus degraded classification performance. Extensive evaluations on several state-of-the-art point-cloud recognition models, including PointNet, PointNet++, and DGCNN, demonstrate the efficacy and generality of our proposed saliency-map-based point-dropping scheme. Code for the experiments is released at https://github.com/tianzheng4/Learning-PointCloud-Saliency-Maps.
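
The core trick, shifting points toward the center to differentiate through point dropping, can be sketched compactly. The snippet below assumes a hypothetical PyTorch classifier `model` on a (1, N, 3) cloud; it uses the identity $\partial L/\partial r = \nabla L \cdot \hat{r}$ to read the radial gradient off the Cartesian one, and omits the exact score weighting and iterative re-scoring used in the paper.

```python
import torch
import torch.nn.functional as F

def saliency_scores(model, points, label):
    """Score each point by the loss increase from shifting it toward the center."""
    points = points.clone().requires_grad_(True)
    loss = F.cross_entropy(model(points), label)
    grad, = torch.autograd.grad(loss, points)      # (1, N, 3) Cartesian gradient
    center = points.mean(dim=1, keepdim=True)      # cloud center
    radial = points - center                       # outward direction of each point
    r = radial.norm(dim=2).clamp_min(1e-8)
    # dL/dr = <grad, r_hat>; moving a point inward (a proxy for dropping it)
    # changes the loss by roughly -dL/dr, so a high score means dropping
    # that point hurts the model most.
    scores = -(grad * radial).sum(dim=2) / r
    return scores.squeeze(0).detach()              # (N,)

def drop_highest(points, scores, k):
    """Drop the k highest-saliency points, keeping the rest."""
    keep = scores.topk(points.shape[1] - k, largest=False).indices
    return points[:, keep, :]
```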


On Isometry Robustness of Deep 3D Point Cloud Models under Adversarial Attacks

arXiv.org Machine Learning

While deep learning in the 3D domain has achieved revolutionary performance in many tasks, the robustness of these models has not been sufficiently studied or explored. Regarding 3D adversarial samples, most existing works focus on manipulating local points, which may fail to capture global geometric properties such as robustness under linear transformations that preserve Euclidean distance, i.e., isometries. In this work, we show that existing state-of-the-art deep 3D models are extremely vulnerable to isometry transformations. Armed with Thompson Sampling, we develop a black-box attack with a success rate of over 95% on the ModelNet40 dataset. Incorporating the Restricted Isometry Property, we propose a novel white-box attack framework built on spectral-norm-based perturbations. In contrast to previous works, our adversarial samples are experimentally shown to be strongly transferable. Evaluated on a sequence of prevailing 3D models, our white-box attack achieves success rates from 98.88% to 100%. It maintains a success rate of over 95% even within an imperceptible rotation range of $\pm 2.81^{\circ}$.
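
A black-box isometry attack is simple to sketch because rotations preserve all pairwise Euclidean distances by construction. The snippet below replaces the paper's Thompson-Sampling search with plain random search over small z-axis rotations, and assumes a hypothetical classifier `model` on a (1, N, 3) cloud; the 5-degree budget is illustrative.

```python
import math
import torch

def rotation_z(theta):
    """3x3 rotation matrix about the z-axis (an exact isometry)."""
    c, s = math.cos(theta), math.sin(theta)
    return torch.tensor([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])

def isometry_attack(model, points, label, trials=100, max_deg=5.0):
    """Randomly search small rotations until the model's prediction flips."""
    for _ in range(trials):
        theta = math.radians((2 * torch.rand(1).item() - 1) * max_deg)
        rotated = points @ rotation_z(theta).T
        if model(rotated).argmax(dim=1).item() != label.item():
            return rotated, math.degrees(theta)    # adversarial isometry found
    return None, None
```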


Adversarial Attack and Defense on Point Sets

arXiv.org Artificial Intelligence

The emergence of 3D point cloud data in critical vision tasks (e.g., ADAS) urges researchers to pay more attention to the robustness of 3D representations and deep networks. To this end, we develop an attack and defense scheme dedicated to 3D point cloud data, for preventing 3D point clouds from being manipulated and for pursuing noise-tolerant 3D representations. A set of novel 3D point cloud attack operations is proposed via pointwise gradient perturbation and adversarial point attachment/detachment. We then develop a flexible perturbation-measurement scheme for 3D point cloud data to detect potential attack data or noisy sensing data. Extensive experimental results on common point cloud benchmarks demonstrate the validity of the proposed 3D attack and defense framework.
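
Of the attack operations above, pointwise gradient perturbation is the most direct to illustrate. The sketch below is a single FGSM-style step on the point coordinates, assuming a hypothetical PyTorch classifier `model` on a (1, N, 3) cloud; the step size `eps` is illustrative, and the paper's attachment/detachment operations are not shown.

```python
import torch
import torch.nn.functional as F

def pointwise_gradient_attack(model, points, label, eps=0.02):
    """Shift every point one small step along the sign of its own loss gradient."""
    points = points.clone().requires_grad_(True)
    loss = F.cross_entropy(model(points), label)
    grad, = torch.autograd.grad(loss, points)      # (1, N, 3) per-point gradients
    return (points + eps * grad.sign()).detach()
```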


Utility Analysis of Network Architectures for 3D Point Cloud Processing

arXiv.org Machine Learning

Note that most widely used benchmark datasets for point cloud classification only contain foreground objects. Therefore, we generate a new dataset in which each point cloud contains both the foreground object and the background. In this new dataset, the background is composed of points that carry no relevant information about the foreground. We will introduce details in Section 5. Metric 3, rotation robustness: the rotation robustness is proposed to measure whether a DNN uses similar subsets of two point clouds to compute the intermediate-layer feature when the two point clouds have the same shape but different orientations. Let $X_{\theta_1}$ and $X_{\theta_2}$ denote point clouds that have the same global shape but different orientations $\theta_1$ and $\theta_2$. To quantify the similarity of the attention on the two point clouds, we compute the Jensen-Shannon divergence between the distributions of the perturbed inputs $\hat{X}_{\theta_1} = X_{\theta_1} + \delta_1$ and $\hat{X}_{\theta_2} = X_{\theta_2} + \delta_2$, where $\hat{X}_{\theta_1}$ and $\hat{X}_{\theta_2}$ denote the perturbed inputs computed to measure information discarding in Equation (1).
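
For reference, the Jensen-Shannon divergence used by this metric can be computed as below. The sketch assumes the perturbed inputs for the two orientations have already been summarized as discrete per-point distributions `p` and `q` over corresponding points (each nonnegative and summing to one); how those distributions are built from $\delta_1$ and $\delta_2$ follows the paper, not this snippet.

```python
import torch

def js_divergence(p, q, eps=1e-12):
    """JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m the mixture."""
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * (a / b).log()).sum()
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```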