Learning Saliency Maps for Adversarial Point-Cloud Generation

arXiv.org Artificial Intelligence

3D point-cloud recognition with deep neural networks (DNNs) has made remarkable progress, achieving both high recognition accuracy and robustness to random point missing (or dropping). However, the robustness of DNNs to maliciously manipulated point missing is still unclear. In this paper, we show that point missing can be a critical security concern by proposing a malicious point-dropping method that generates adversarial point clouds to fool DNNs. Our method is based on learning a saliency map for a whole point cloud, which assigns each point a score reflecting its contribution to the model-recognition loss, i.e., the difference between the losses with and without that point. The saliency map is learned by approximating the nondifferentiable point-dropping process with a differentiable procedure that shifts points toward the cloud center. In this way, the loss difference, i.e., the saliency score for each point in the map, can be measured by the corresponding gradient of the loss w.r.t. the point under spherical coordinates. Based on the learned saliency map, a malicious point-dropping attack can be carried out by dropping the points with the highest scores, leading to a significant increase in model loss and thus inferior classification performance. Extensive evaluations on several state-of-the-art point-cloud recognition models, including PointNet, PointNet++ and DGCNN, demonstrate the efficacy and generality of our proposed saliency-map-based point-dropping scheme. Code for the experiments is released at https://github.com/tianzheng4/Learning-PointCloud-Saliency-Maps.
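
The gradient trick in the abstract is compact enough to sketch. Below is a minimal PyTorch illustration of the idea as described: compute the loss gradient w.r.t. each point, project it onto the radial direction from the cloud center (the paper uses the coordinate-wise median and additionally rescales scores by a power of the radius), and treat the negative radial gradient as the saliency score. The function name, the model interface (a (1, N, 3) cloud in, class logits out), and the simplest radius scaling are our assumptions, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def point_saliency_scores(model, points, label):
    """Hedged sketch of the gradient-based saliency described above.

    points: (N, 3) tensor; model is assumed to map a (1, N, 3) cloud to
    class logits. Each score approximates the loss change from shifting
    that point toward the cloud center, i.e., from "dropping" it.
    """
    points = points.clone().detach().requires_grad_(True)
    logits = model(points.unsqueeze(0))
    loss = F.cross_entropy(logits, label.unsqueeze(0))
    loss.backward()

    center = points.detach().median(dim=0).values   # cloud center (median, per the paper)
    offsets = points.detach() - center              # vectors from center to each point
    r = offsets.norm(dim=1, keepdim=True).clamp_min(1e-8)
    # Radial derivative: project the Cartesian gradient onto the outward direction.
    dL_dr = (points.grad * offsets / r).sum(dim=1)
    # Shifting a point inward by delta changes the loss by roughly -dL/dr * delta,
    # so large positive scores mark points whose removal should raise the loss.
    # (The paper rescales by r^(1+alpha); this uses the simplest case.)
    return -dL_dr * r.squeeze(1)

# The attack then drops the top-k scoring points:
# scores = point_saliency_scores(model, points, label)
# adv_points = points[scores.argsort()[: points.shape[0] - k]]
```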


Microsoft Promising To Bring Mixed Reality and AI to SharePoint Online -- Redmondmag.com

#artificialintelligence

Microsoft on Monday described its SharePoint vision, new features and coming attractions, kicking off this week's SharePoint North America event. The nearly two-hour keynote featured Jeff Teper, corporate vice president for OneDrive, SharePoint and Office, along with talks and demos by other SharePoint luminaries. As usual with the big SharePoint events, Microsoft doled out lots of details; this article focuses on the broad strokes. The general themes of the talk were the increased use of artificial intelligence (AI) in SharePoint and mixed reality, including the new "SharePoint Spaces" three-dimensional capability for sites.


Microsoft to add mixed reality support to SharePoint with SharePoint Spaces

ZDNet

Microsoft is making good on its commitment to bring mixed-reality and artificial intelligence capabilities to all its products with a preview version of "SharePoint Spaces," as well as new AI enhancements coming to SharePoint and OneDrive. SharePoint Spaces will allow SharePoint users to create and consume mixed-reality 3D "spaces" where they can visualize and interact with data and product models. Microsoft unveiled the new technology on May 21 during the SharePoint Conference North America opening keynote. To view or manipulate a mixed-reality SharePoint Space, users can -- but don't need to -- use a HoloLens or another type of Windows Mixed Reality headset. Microsoft is designing SharePoint Spaces to work in a browser without a headset as well.


PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space

Neural Information Processing Systems

Few prior works study deep learning on point sets. PointNet [20] is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and to generalize to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. Observing further that point sets are usually sampled with varying densities, which greatly degrades the performance of networks trained on uniform densities, we propose novel set learning layers that adaptively combine features from multiple scales. Experiments show that our network, called PointNet++, is able to learn deep point set features efficiently and robustly. In particular, results significantly better than the state of the art have been obtained on challenging benchmarks of 3D point clouds.
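
To make "applying PointNet recursively on a nested partitioning" concrete, here is a minimal PyTorch sketch of the two geometric primitives each hierarchy level rests on: farthest point sampling to pick well-spread centroids, and ball-query grouping to collect each centroid's local neighborhood. Function names and the padding strategy are our assumptions; in PointNet++ each grouped neighborhood is then fed through a shared mini-PointNet to produce a feature for the next, coarser level.

```python
import torch

def farthest_point_sample(xyz, m):
    """Iterative farthest point sampling: pick m well-spread centroid indices
    from an (N, 3) point set xyz."""
    N = xyz.shape[0]
    idx = torch.zeros(m, dtype=torch.long)
    dist = torch.full((N,), float("inf"))
    farthest = torch.randint(N, (1,)).item()       # random seed point
    for i in range(m):
        idx[i] = farthest
        d = ((xyz - xyz[farthest]) ** 2).sum(dim=1)
        dist = torch.minimum(dist, d)              # distance to nearest chosen centroid
        farthest = dist.argmax().item()            # next centroid: the most isolated point
    return idx

def ball_group(xyz, centroid_idx, radius, k):
    """Ball query: for each centroid, gather up to k neighbor indices within
    `radius`; pad short groups by repeating the first neighbor."""
    d = torch.cdist(xyz[centroid_idx], xyz)        # (m, N) pairwise distances
    groups = []
    for row in d:
        in_ball = (row <= radius).nonzero(as_tuple=True)[0][:k]
        pad = in_ball[[0]].repeat(k - len(in_ball)) if len(in_ball) < k else in_ball[:0]
        groups.append(torch.cat([in_ball, pad]))
    return torch.stack(groups)                     # (m, k) neighbor indices per centroid
```

Each (m, k) group of points would then be normalized relative to its centroid and passed through a shared mini-PointNet, and the sample-group-abstract cycle repeats on the m centroids to build features at increasing contextual scales.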


Multiview Based 3D Scene Understanding On Partial Point Sets

arXiv.org Machine Learning

Deep learning on point clouds has gained much research interest in recent years, mostly due to the promising results achieved on a number of challenging benchmarks, such as 3D shape recognition and scene semantic segmentation. In many realistic settings, however, snapshots of the environment are often taken from a single view, which captures only a partial set of the scene due to the field-of-view restriction of commodity cameras. 3D scene semantic understanding on partial point clouds is considered a challenging task. In this work, we propose a processing approach for 3D point-cloud data based on a multiview representation of the existing 360° point clouds. By fusing the original 360° point clouds and their corresponding 3D multiview representations as input data, a neural network is able to recognize partial point sets while improving its general performance on complete point sets, resulting in an overall increase of 31.9% and 4.3% in segmentation accuracy for partial and complete scene semantic understanding, respectively. This method can also be applied in wider 3D recognition contexts such as 3D part segmentation.
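
The abstract does not spell out how the multiview representation is rendered, so the following NumPy sketch is only one plausible reading: rotate the cloud about the vertical axis and orthographically project it into a handful of depth images, which could then be fused with the raw points as network input. The function name, view count, resolution, and rendering choices here are all our assumptions.

```python
import numpy as np

def depth_views(points, n_views=6, res=64):
    """Render an (N, 3) point cloud into n_views depth images by rotating it
    about the vertical (y) axis and orthographically projecting onto x-y."""
    points = points - points.mean(axis=0)                    # center the cloud
    points = points / np.linalg.norm(points, axis=1).max()   # fit inside the unit sphere
    views = np.full((n_views, res, res), np.inf)
    for v in range(n_views):
        theta = 2 * np.pi * v / n_views
        rot = np.array([[np.cos(theta), 0, np.sin(theta)],
                        [0, 1, 0],
                        [-np.sin(theta), 0, np.cos(theta)]])
        p = points @ rot.T
        # Map x, y in [-1, 1] to pixel coordinates; keep the nearest z per pixel.
        ij = ((p[:, :2] + 1) / 2 * (res - 1)).astype(int)
        for (i, j), z in zip(ij, p[:, 2]):
            views[v, j, i] = min(views[v, j, i], z)
    views[np.isinf(views)] = 0.0                             # empty pixels: background depth
    return views
```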