Learning to Orient Surfaces by Self-supervised Spherical CNNs (Supplementary Material), Federico Stella, Luciano Silva

Neural Information Processing Systems

In this section, we study how the data augmentation carried out while training on local surface patches improves the robustness of Compass against self-occlusions and missing parts. To this end, we run an ablation experiment adopting the same training pipeline described in Section 3.2 of the main paper, but without randomly removing points from the input cloud. As in the main paper, we train the model on 3DMatch and test it on 3DMatch, ETH, and Stanford Views, comparing Compass against its ablated version in terms of repeatability of the LRFs. Results for 3DMatch are shown in Table 1: the performance gain achieved by Compass when deploying the proposed data augmentation validates its importance.
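As an illustration of the kind of augmentation discussed above, the sketch below removes a contiguous region of points around a randomly chosen seed to mimic a self-occlusion or missing part. The function name, the drop ratio, and the region-based removal strategy are assumptions made for this example; the exact augmentation used by Compass may differ.

```python
import numpy as np

def drop_random_region(points, drop_ratio=0.3, rng=None):
    """Simulate a self-occlusion by removing the points closest to a
    randomly chosen seed point (illustrative sketch, not necessarily the
    exact augmentation used by Compass)."""
    rng = np.random.default_rng() if rng is None else rng
    n = points.shape[0]
    n_drop = int(drop_ratio * n)
    if n_drop == 0:
        return points
    seed = points[rng.integers(n)]                # random seed point on the patch
    dist = np.linalg.norm(points - seed, axis=1)  # distance of every point to the seed
    keep = np.argsort(dist)[n_drop:]              # drop the n_drop nearest points
    return points[keep]

# Usage: augment a local patch of 2048 points before feeding it to the network.
patch = np.random.rand(2048, 3).astype(np.float32)
augmented = drop_random_region(patch, drop_ratio=0.3)
```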


Learning to Orient Surfaces by Self-supervised Spherical CNNs, Federico Stella, Luciano Silva

Neural Information Processing Systems

Defining and reliably finding a canonical orientation for 3D surfaces is key to many Computer Vision and Robotics applications. This task is commonly addressed by handcrafted algorithms exploiting geometric cues deemed as distinctive and robust by the designer. Yet, one might conjecture that humans learn the notion of the inherent orientation of 3D objects from experience and that machines may do so alike. In this work, we show the feasibility of learning a robust canonical orientation for surfaces represented as point clouds. Based on the observation that the quintessential property of a canonical orientation is equivariance to 3D rotations, we propose to employ Spherical CNNs, a recently introduced machinery that can learn equivariant representations defined on the Special Orthogonal group SO(3). Specifically, spherical correlations compute feature maps whose elements define 3D rotations. Our method learns such feature maps from raw data by a self-supervised training procedure and robustly selects a rotation to transform the input point cloud into a learned canonical orientation. Thereby, we realize the first end-to-end learning approach to define and extract the canonical orientation of 3D shapes, which we aptly dub Compass. Experiments on several public datasets prove its effectiveness at orienting local surface patches as well as whole objects.
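To make the core idea concrete, the following sketch shows how a feature map defined over a discretized SO(3) grid can be turned into a canonicalizing rotation: take the argmax of the map, look up the corresponding rotation, and apply it to the input cloud. The ZYZ Euler-angle parameterization of the grid, the grid resolution, and the choice of applying the inverse of the selected rotation are assumptions of this example, not details taken from the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def canonicalize(points, so3_feature_map, alpha, beta, gamma):
    """Rotate `points` into a canonical frame given a feature map over a
    discretized SO(3) grid (hypothetical sketch of the selection step)."""
    # Pick the grid rotation with the highest response.
    i, j, k = np.unravel_index(np.argmax(so3_feature_map), so3_feature_map.shape)
    R = Rotation.from_euler("ZYZ", [alpha[i], beta[j], gamma[k]]).as_matrix()
    # Apply the inverse rotation (R^T) to row-vector points: p' = p @ R,
    # which equals (R^T p^T)^T, bringing the cloud into the assumed canonical frame.
    return points @ R

# Usage on a toy 16 x 8 x 16 SO(3) grid with a random stand-in for the network output.
alpha = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
beta = np.linspace(0.0, np.pi, 8)
gamma = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
fmap = np.random.rand(16, 8, 16)
cloud = np.random.rand(1024, 3)       # a local patch or a whole object
canonical = canonicalize(cloud, fmap, alpha, beta, gamma)
```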


We propose a more general framework that can also be adopted to orient whole objects and perform rotation-invariant shape classification.

Neural Information Processing Systems

R1: We agree that defining a canonical orientation for local patches is mainly aimed at descriptor matching, as recently shown by Bai et al. in "D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features"; we will add this information to the revised version. In the evaluation in Table 2, we follow the standard protocol used in [39] to perform a fair comparison.

R2: We used the more general term "pose" to refer to the canonical orientation, as achieving translation invariance is usually straightforward. As suggested, we will use only the term orientation and will modify it in the final version of the paper.







High-Order Pooling for Graph Neural Networks with Tensor Decomposition, Chenqing Hua, Jian Tang, McGill University

Neural Information Processing Systems

Graph Neural Networks (GNNs) are attracting growing attention due to their effectiveness and flexibility in modeling a variety of graph-structured data. Existing GNN architectures usually adopt simple pooling operations (e.g., sum, average, max) when aggregating messages from a local neighborhood to update a node representation, or when pooling node representations from the entire graph to compute the graph representation. Though simple and effective, these linear operations do not model high-order non-linear interactions among nodes. We propose the Tensorized Graph Neural Network (tGNN), a highly expressive GNN architecture relying on tensor decomposition to model high-order non-linear node interactions.
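To illustrate the difference between linear pooling and a decomposition-based aggregator, the sketch below contrasts a mean-pooling baseline with a simplified CP-decomposition-style pooling in which projected neighbor features are combined by an element-wise product, introducing multiplicative (high-order) interactions among neighbors. The function names, shapes, and this particular formulation are assumptions for illustration; they are not the exact tGNN layer.

```python
import numpy as np

def cp_style_aggregate(h_neighbors, W, b, U):
    """Simplified CP-decomposition-style neighborhood pooling (illustrative,
    not the exact tGNN layer). Neighbor features are projected into a
    rank-R space, combined by an element-wise product across neighbors
    (the source of high-order multiplicative interactions), and mapped
    back to the output dimension."""
    # h_neighbors: (num_neighbors, d_in), W: (d_in, rank), b: (rank,), U: (rank, d_out)
    proj = h_neighbors @ W + b       # (num_neighbors, rank)
    z = np.prod(proj, axis=0)        # Hadamard product over neighbors -> (rank,)
    return z @ U                     # (d_out,)

def mean_aggregate(h_neighbors, W, b, U):
    """Linear baseline: the same projections combined by averaging."""
    proj = h_neighbors @ W + b
    return proj.mean(axis=0) @ U

# Usage: aggregate messages from 5 neighbors with 16-dimensional features.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 16))
W = rng.normal(size=(16, 32))
b = rng.normal(size=32)
U = rng.normal(size=(32, 8))
high_order = cp_style_aggregate(H, W, b, U)  # contains products of neighbor features
linear = mean_aggregate(H, W, b, U)          # contains only first-order terms
```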