Haridevan, Amaldev
L-PR: Exploiting LiDAR Fiducial Marker for Unordered Low Overlap Multiview Point Cloud Registration
Liu, Yibo, Shan, Jinjun, Haridevan, Amaldev, Zhang, Shuo, Lin, Kejian
Point cloud registration is a prerequisite for many applications in computer vision and robotics. Most existing methods focus on pairwise registration of two point clouds with high overlap. Although some methods handle low-overlap cases, they struggle in degraded scenarios. This paper introduces a novel framework named L-PR, designed to register unordered, low-overlap multiview point clouds by leveraging LiDAR fiducial markers. Although we refer to them as LiDAR fiducial markers, they are physically the same as the popular AprilTag and ArUco markers: thin sheets of paper that do not affect the 3D geometry of the environment. We first propose an improved adaptive-threshold marker detection method that provides robust detections when the viewpoints among point clouds change dramatically. Then, we formulate the unordered multiview point cloud registration problem as a maximum a posteriori (MAP) problem and develop a framework consisting of two levels of graphs to address it. The first-level graph, constructed as a weighted graph, efficiently and optimally infers initial values of the scan poses from the unordered set. The second-level graph is constructed as a factor graph; by globally optimizing the variables on this graph, including scan poses, marker poses, and marker corner positions, we solve the MAP problem. Qualitative and quantitative experiments demonstrate that the proposed method outperforms competitors in four aspects: registration accuracy, instance reconstruction quality, localization accuracy, and robustness to degraded scenes. To benefit the community, we open-source our method and dataset at https://github.com/yorklyb/LiDAR-SFM.
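Purely as an illustration of the two-level idea the abstract describes, and not the authors' implementation, the sketch below builds a first-level weighted graph from marker detections shared between scans, extracts a maximum spanning tree, and propagates initial scan poses along it by aligning shared marker corners. The weighting by the number of co-visible markers and the names `marker_detections` and `estimate_relative_pose` are assumptions made for the example; it also assumes every scan shares at least one marker with some other scan.

```python
# Minimal sketch of a first-level weighted graph for initial scan poses.
# Not the authors' code; weighting scheme and data layout are assumptions.
import itertools
import networkx as nx
import numpy as np

def estimate_relative_pose(corners_i, corners_j):
    """4x4 rigid transform mapping points from scan j's frame into scan i's frame,
    estimated from matched marker corners via Kabsch/Umeyama alignment."""
    mu_i, mu_j = corners_i.mean(0), corners_j.mean(0)
    H = (corners_j - mu_j).T @ (corners_i - mu_i)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, mu_i - R @ mu_j
    return T

def initial_scan_poses(marker_detections):
    """marker_detections: {scan_id: {marker_id: (4, 3) corner array in the scan frame}}.
    Returns {scan_id: 4x4 pose} expressed in the frame of an arbitrary reference scan."""
    G = nx.Graph()
    for i, j in itertools.combinations(marker_detections, 2):
        shared = set(marker_detections[i]) & set(marker_detections[j])
        if shared:
            # Assumed weighting: more co-visible markers -> stronger edge.
            G.add_edge(i, j, weight=len(shared), shared=sorted(shared))
    tree = nx.maximum_spanning_tree(G)
    ref = next(iter(marker_detections))
    poses = {ref: np.eye(4)}
    for i, j in nx.bfs_edges(tree, ref):        # parents are visited before children
        shared = tree[i][j]["shared"]
        ci = np.vstack([marker_detections[i][m] for m in shared])
        cj = np.vstack([marker_detections[j][m] for m in shared])
        poses[j] = poses[i] @ estimate_relative_pose(ci, cj)
    return poses
```

The second level described in the abstract, jointly refining scan poses, marker poses, and corner positions on a factor graph, would typically be handled afterwards by a nonlinear least-squares solver such as GTSAM or Ceres; it is not shown here.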
Application of Ghost-DeblurGAN to Fiducial Marker Detection
Liu, Yibo, Haridevan, Amaldev, Schofield, Hunter, Shan, Jinjun
Feature extraction or localization based on fiducial markers can fail due to motion blur in real-world robotic applications. Existing fiducial marker systems [10], [7], [9], [8] do not take motion blur as a routine case, which makes the adoption of a deblurring method necessary; however, previous studies have not dealt with the application of deep-learning-based deblurring methods in fiducial marker systems. To solve this problem, a lightweight generative adversarial network, named Ghost-DeblurGAN, for real-time motion deblurring is developed in this paper. Furthermore, since no deblurring benchmark exists for this task, a new large-scale dataset, YorkTag, is proposed that provides pairs of sharp/blurred images containing fiducial markers. With the proposed model trained and tested on YorkTag, it is demonstrated that when applied along with fiducial marker systems to motion-blurred images, Ghost-DeblurGAN significantly improves marker detection.
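As a usage illustration only, not code from the paper, the following sketch shows where such a deblurring network sits in a marker detection pipeline: restore the motion-blurred frame first, then run an off-the-shelf detector on the result. The `deblur` callable standing in for a trained Ghost-DeblurGAN model is a placeholder the reader must supply, and the detector call assumes OpenCV >= 4.7 with the contrib aruco module.

```python
# Sketch of a deblur-then-detect pipeline; `deblur` is a user-supplied placeholder.
import cv2
import numpy as np

def detect_markers(frame_bgr: np.ndarray, deblur):
    """Deblur the frame, then detect AprilTag 36h11 markers with OpenCV's aruco module."""
    restored = deblur(frame_bgr)                     # placeholder for Ghost-DeblurGAN inference
    gray = cv2.cvtColor(restored, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _rejected = detector.detectMarkers(gray)
    return corners, ids
```

The same structure applies to ArUco markers; only the dictionary constant changes.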