Robust LiDAR-Camera Calibration with 2D Gaussian Splatting
Shuyi Zhou, Shuxiang Xie, Ryoichi Ishikawa, Takeshi Oishi
–arXiv.org Artificial Intelligence
LiDAR-camera systems have become increasingly popular in robotics. A critical first step in integrating LiDAR and camera data is calibrating the LiDAR-camera system. Most existing calibration methods rely on auxiliary target objects, which often require complex manual operations, whereas targetless methods have yet to achieve practical effectiveness. Recognizing that 2D Gaussian Splatting (2DGS) can reconstruct geometric information from camera image sequences, we propose a calibration method that estimates LiDAR-camera extrinsic parameters using geometric constraints. The proposed method begins by reconstructing colorless 2DGS from LiDAR point clouds. Subsequently, we update the colors of the Gaussian splats by minimizing the photometric loss, and the extrinsic parameters are optimized during this process. Additionally, we address the limitations of the photometric loss by incorporating reprojection and triangulation losses, thereby enhancing calibration robustness and accuracy.

I. INTRODUCTION

LiDAR-camera fusion plays a critical role in autonomous driving and robotics. By integrating accurate depth measurements from LiDAR with dense optical observations provided by cameras, we can develop robust solutions for various tasks, including object detection [1], simultaneous localization and mapping (SLAM) [2], and 3D reconstruction [3].
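The abstract describes optimizing the extrinsic parameters by minimizing a combination of photometric, reprojection, and triangulation losses. The following is a minimal sketch of that idea, not the authors' implementation: the 6-DoF extrinsic perturbation is a plain vector, each loss term is a toy quadratic surrogate (in the actual method these come from rendering the 2D Gaussian splats and from feature correspondences), and the optimizer is finite-difference gradient descent. All names and values here are illustrative assumptions.

```python
# Sketch of joint extrinsic optimization over a weighted sum of losses.
# The quadratic surrogates and the ground-truth offset `GT` are toy
# stand-ins for the real 2DGS-based photometric/reprojection/
# triangulation losses described in the paper.

GT = [0.01, -0.02, 0.005, 0.1, -0.05, 0.02]  # assumed true extrinsic offset

def total_loss(xi, w_photo=1.0, w_reproj=0.5, w_tri=0.5):
    """xi: 6-vector (3 rotation + 3 translation perturbations)."""
    photo  = sum((x - g) ** 2 for x, g in zip(xi, GT))
    # reprojection surrogate: sensitive to the translation components
    reproj = sum((x - g) ** 2 for x, g in zip(xi[3:], GT[3:]))
    # triangulation surrogate: sensitive to the rotation components
    tri    = sum((x - g) ** 2 for x, g in zip(xi[:3], GT[:3]))
    return w_photo * photo + w_reproj * reproj + w_tri * tri

def optimize(xi, lr=0.1, steps=500, eps=1e-6):
    """Finite-difference gradient descent on the combined loss."""
    xi = list(xi)
    for _ in range(steps):
        grad = []
        for i in range(len(xi)):
            xp, xm = xi[:], xi[:]
            xp[i] += eps
            xm[i] -= eps
            grad.append((total_loss(xp) - total_loss(xm)) / (2 * eps))
        xi = [x - lr * g for x, g in zip(xi, grad)]
    return xi

if __name__ == "__main__":
    est = optimize([0.0] * 6)  # converges toward GT
    print([round(v, 3) for v in est])
```

With the quadratic surrogates the combined loss is convex, so gradient descent recovers the assumed offset; the real losses are non-convex, which is why the paper adds the reprojection and triangulation terms to improve robustness of the photometric objective.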
Apr-1-2025