One-Inlier is First: Towards Efficient Position Encoding for Point Cloud Registration -- Supplementary Material

Fan Yang, Lin Guo, Zhi Chen, Wenbing Tao

Neural Information Processing Systems

In this supplementary material, we first provide the rigorous definitions of evaluation metrics (Sec. A.1), then describe the network architectures. We further provide additional ablation studies and discuss the broader impact. On 3DMatch and 3DLoMatch, we report Inlier Ratio (IR), Feature Matching Recall (FMR), and Registration Recall (RR).
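As a rough illustration of the first of these metrics (not the paper's exact evaluation protocol; the threshold value and function name here are assumptions), Inlier Ratio can be sketched as the fraction of putative correspondences whose residual under the ground-truth rigid transform falls below a distance threshold:

```python
import numpy as np

def inlier_ratio(src_pts, tgt_pts, R_gt, t_gt, tau=0.1):
    """Fraction of correspondences (src_pts[i], tgt_pts[i]) whose
    residual under the ground-truth rigid transform (R_gt, t_gt)
    is below tau. src_pts, tgt_pts: (N, 3) matched points."""
    residuals = np.linalg.norm(src_pts @ R_gt.T + t_gt - tgt_pts, axis=1)
    return float(np.mean(residuals < tau))

# Identity transform, one correspondence pushed out of tolerance:
src = np.zeros((4, 3))
tgt = np.zeros((4, 3))
tgt[0] = [1.0, 0.0, 0.0]  # 1 m residual -> outlier at tau = 0.1
print(inlier_ratio(src, tgt, np.eye(3), np.zeros(3)))  # 0.75
```

FMR and RR are then typically defined on top of this quantity: the fraction of pairs whose IR (resp. final registration error) clears a threshold.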


A Appendix

Neural Information Processing Systems

In this supplementary material, we first provide detailed network architectures. Then the metrics utilized in our experiments are described, and more implementation details are given. We further introduce our utilized datasets in Sec. A.5. Limitations and broader impact are then discussed.


Cross-modal feature fusion for robust point cloud registration with ambiguous geometry

Wang, Zhaoyi, Huang, Shengyu, Butt, Jemil Avers, Cai, Yuanzhou, Varga, Matej, Wieser, Andreas

arXiv.org Artificial Intelligence

Point cloud registration has seen significant advancements with the application of deep learning techniques. However, existing approaches often overlook the potential of integrating radiometric information from RGB images. This limitation reduces their effectiveness in aligning point cloud pairs, especially in regions where geometric data alone is insufficient. When used effectively, radiometric information can enhance the registration process by providing context that is missing from purely geometric data. In this paper, we propose CoFF, a novel Cross-modal Feature Fusion method that utilizes both point cloud geometry and RGB images for pairwise point cloud registration. Assuming that the co-registration between point clouds and RGB images is available, CoFF explicitly addresses the challenges where geometric information alone is unclear, such as in regions with symmetric similarity or planar structures, through a two-stage fusion of 3D point cloud features and 2D image features. It incorporates a cross-modal feature fusion module that assigns pixel-wise image features to 3D input point clouds to enhance learned 3D point features, and integrates patch-wise image features with superpoint features to improve the quality of coarse matching. This is followed by a coarse-to-fine matching module that accurately establishes correspondences using the fused features. We extensively evaluate CoFF on four common datasets: 3DMatch, 3DLoMatch, IndoorLRS, and the recently released ScanNet++ dataset. In addition, we assess CoFF on specific subset datasets containing geometrically ambiguous cases. Our experimental results demonstrate that CoFF achieves state-of-the-art registration performance across all benchmarks, including remarkable registration recalls of 95.9% and 81.6% on the widely-used 3DMatch and 3DLoMatch datasets, respectively...(Truncated to fit arXiv abstract length)
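The pixel-wise assignment step the abstract describes, giving each 3D point an image feature via the known point-cloud-to-image co-registration, can be sketched minimally as follows (a pinhole projection with nearest-pixel lookup; all names and the camera model are illustrative assumptions, not CoFF's actual implementation):

```python
import numpy as np

def gather_pixel_features(points, img_feats, K):
    """Project 3D points (N, 3), given in camera coordinates, through
    intrinsics K (3, 3) and gather per-pixel image features (H, W, C)
    at the projections, so each point gets a radiometric feature to
    fuse with its learned 3D feature."""
    H, W, _ = img_feats.shape
    uv = points @ K.T                      # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]            # perspective divide
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    return img_feats[v, u]                 # (N, C) per-point image features
```

A real pipeline would additionally handle points behind the camera and use learned 2D feature maps rather than raw pixels; the gather itself is the core of the first fusion stage.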


SE3ET: SE(3)-Equivariant Transformer for Low-Overlap Point Cloud Registration

Lin, Chien Erh, Zhu, Minghan, Ghaffari, Maani

arXiv.org Artificial Intelligence

Accepted July 2024. Partial point cloud registration is a challenging problem in robotics, especially when the robot undergoes a large transformation, causing a significant initial pose error and a low overlap between measurements. This work proposes exploiting equivariant learning from 3D point clouds to improve registration robustness. We propose SE3ET, an SE(3)-equivariant registration framework that employs equivariant point convolution and equivariant transformer designs to learn expressive and robust geometric features. We tested the proposed registration method on indoor and outdoor benchmarks where the point clouds are under arbitrary transformations and low overlapping ratios. We also provide generalization tests and run-time performance. Point cloud registration has gained significant attention recently due to advancements in 3D sensor technology and computational resources. It seeks to determine the optimal transformation between two point clouds, addressing core challenges in computer vision, computer graphics, and robotics [1], [2]. These tasks include 3D localization, 3D reconstruction, pose estimation, and simultaneous localization and mapping (SLAM) [3]. Partial-to-partial registration is widespread yet challenging in robotics applications. Many point cloud registration methods require sufficient overlap between two point clouds to find an accurate transformation [4].
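The equivariance property the framework relies on can be stated concretely: a feature map Phi is SE(3)-equivariant if transforming the input transforms the output in the same way, Phi(R·P + t) = R·Phi(P). A toy check with the simplest such feature map, centred coordinates (purely illustrative, not SE3ET's architecture):

```python
import numpy as np

def centred(points):
    """Centred coordinates: a trivially SE(3)-equivariant feature map.
    Translation cancels against the shifted centroid; rotation acts
    linearly on the output."""
    return points - points.mean(axis=0)

rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
# A rotation about z by 90 degrees plus an arbitrary translation:
R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t = np.array([5., -2., 3.])
# Equivariance: Phi(R P + t) == R Phi(P)
lhs = centred(P @ R.T + t)
rhs = centred(P) @ R.T
print(np.allclose(lhs, rhs))  # True
```

Learned equivariant layers (equivariant convolutions and transformers, as in SE3ET) preserve this same commutation property for much richer features, which is what makes the descriptors robust to large initial pose errors.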


CoFiNet: Reliable Coarse-to-fine Correspondences for Robust Point Cloud Registration

Yu, Hao, Li, Fu, Saleh, Mahdi, Busam, Benjamin, Ilic, Slobodan

arXiv.org Artificial Intelligence

We study the problem of extracting correspondences between a pair of point clouds for registration. For correspondence retrieval, existing works benefit from matching sparse keypoints detected from dense points but usually struggle to guarantee their repeatability. To address this issue, we present CoFiNet - Coarse-to-Fine Network - which extracts hierarchical correspondences from coarse to fine without keypoint detection. On a coarse scale and guided by a weighting scheme, our model first learns to match down-sampled nodes whose vicinity points share more overlap, which significantly shrinks the search space of a consecutive stage. On a finer scale, node proposals are consecutively expanded to patches that consist of groups of points together with associated descriptors. Point correspondences are then refined from the overlap areas of corresponding patches, by a density-adaptive matching module capable of dealing with varying point density. Extensive evaluation of CoFiNet on both indoor and outdoor standard benchmarks shows our superiority over existing methods. Especially on 3DLoMatch, where point clouds share less overlap, CoFiNet significantly outperforms state-of-the-art approaches by at least 5% on Registration Recall, with at most two-thirds of their parameters.
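A stripped-down version of the coarse matching idea is mutual nearest neighbours between down-sampled node descriptors (purely illustrative; CoFiNet uses a learned weighting scheme and optimal-transport-style matching rather than this hard assignment):

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Coarse correspondences: pairs (i, j) where node i's nearest
    descriptor in B is j AND node j's nearest descriptor in A is i.
    Surviving pairs seed the fine, patch-level matching stage."""
    d = np.linalg.norm(desc_a[:, None] - desc_b[None, :], axis=2)
    nn_ab = d.argmin(axis=1)   # best match in B for each node in A
    nn_ba = d.argmin(axis=0)   # best match in A for each node in B
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

A = np.array([[0., 0.], [10., 0.], [0., 10.]])
B = np.array([[0.1, 0.], [0., 9.9]])       # close to A[0] and A[2]
print(mutual_nn_matches(A, B))             # [(0, 0), (2, 1)]
```

Restricting fine matching to the patches around such node pairs is what shrinks the search space, since dense point-to-point matching only happens inside the few retained patch pairs.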