Skuddis, David
3D Gaussian Splatting aided Localization for Large and Complex Indoor-Environments
Ress, Vincent, Meyer, Jonas, Zhang, Wei, Skuddis, David, Soergel, Uwe, Haala, Norbert
Recent breakthroughs in deep learning, including 3D Gaussian Splatting (3DGS) (Kerbl et al., 2024), have significantly advanced both the performance and visual quality of 3D reconstruction. Within our work, we focus on 3D mapping of complex, large-scale indoor environments such as construction sites and factory halls. This initiative is driven by a project within the Cluster of Excellence Integrative Computational Design and Construction for Architecture (IntCDC) at the University of Stuttgart, which aims to enable autonomous indoor construction for new or preexisting buildings (IntCDC, 2024a). Typical construction tasks, including material handling and element assembly, require highly accurate mapping approaches to enable precise localization of both building components and the construction robots. Image-based localization methods are particularly valuable due to the widespread availability and low cost of cameras, which are now standard equipment on most modern robots.
HI-SLAM2: Geometry-Aware Gaussian SLAM for Fast Monocular Scene Reconstruction
Zhang, Wei, Cheng, Qing, Skuddis, David, Zeller, Niclas, Cremers, Daniel, Haala, Norbert
We present HI-SLAM2, a geometry-aware Gaussian SLAM system that achieves fast and accurate monocular scene reconstruction using only RGB input. While existing Neural SLAM and 3DGS-based SLAM methods often trade off rendering quality against geometry accuracy, our research demonstrates that both can be achieved simultaneously with RGB input alone. The key idea of our approach is to improve geometry estimation by combining easy-to-obtain monocular priors with learning-based dense SLAM, and to use 3D Gaussian splatting as our core map representation to efficiently model the scene. Upon loop closure, our method ensures on-the-fly global consistency through efficient pose graph bundle adjustment and instant map updates, explicitly deforming the 3D Gaussian units based on anchored keyframe updates. Furthermore, we introduce a grid-based scale alignment strategy that maintains scale consistency in the prior depths and recovers finer depth details. Through extensive experiments on Replica, ScanNet, and ScanNet++, we demonstrate significant improvements over existing Neural SLAM methods and even surpass RGB-D-based methods in both reconstruction and rendering quality. The project page and source code will be made available at https://hi-slam2.github.io/.
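To illustrate how such a grid-based scale alignment can work in principle, the sketch below solves one least-squares scale factor per image cell to align a monocular prior depth map with (possibly sparse) SLAM depth. It is only a rough approximation of the idea in the abstract, not the HI-SLAM2 implementation; the function name grid_scale_alignment, the cell layout, the minimum point count, and the per-cell closed-form solution are assumptions.

```python
import numpy as np

def grid_scale_alignment(prior_depth, slam_depth, valid_mask, grid=(8, 8)):
    """Hypothetical sketch: align a monocular prior depth map to SLAM depth
    by solving one scale factor per grid cell (cell size and the per-cell
    least-squares formulation are assumptions, not the paper's method)."""
    H, W = prior_depth.shape
    aligned = prior_depth.copy()
    rows = np.linspace(0, H, grid[0] + 1, dtype=int)
    cols = np.linspace(0, W, grid[1] + 1, dtype=int)
    for i in range(grid[0]):
        for j in range(grid[1]):
            ys, xs = slice(rows[i], rows[i + 1]), slice(cols[j], cols[j + 1])
            m = valid_mask[ys, xs]
            if m.sum() < 10:                  # too few SLAM depths in this cell
                continue                      # keep the prior depth unchanged
            p = prior_depth[ys, xs][m]
            s = slam_depth[ys, xs][m]
            # closed-form 1-D least squares: argmin_a || a * p - s ||^2
            a = float(p @ s) / float(p @ p)
            aligned[ys, xs] = a * prior_depth[ys, xs]
    return aligned
```

A smoother interpolation of the per-cell scales would avoid seams between neighbouring cells; the hard per-cell assignment above is kept only for brevity.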
DMSA -- Dense Multi Scan Adjustment for LiDAR Inertial Odometry and Global Optimization
Skuddis, David, Haala, Norbert
We propose a new method for the simultaneous fine registration of multiple point clouds. The approach is dense: point clouds are not reduced to pre-selected features in advance. Furthermore, the approach is robust against small overlaps and dynamic objects, since no direct correspondences between point clouds are assumed. Instead, all points are merged into a global point cloud whose scattering is then iteratively reduced. This is achieved by dividing the global point cloud into uniform grid cells whose contents are subsequently modeled by normal distributions. We show that the proposed approach can be used in a sliding-window continuous trajectory optimization combined with IMU measurements to obtain highly accurate and robust LiDAR inertial odometry. Furthermore, we show that the proposed approach is also suitable for large-scale keyframe optimization to further increase accuracy. We provide the source code and some experimental data at https://github.com/davidskdds/DMSA_LiDAR_SLAM.git.
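To make the grid-cell idea more concrete, the following sketch evaluates a DMSA-style scattering objective for a set of scans and poses: all points are transformed into a common frame, binned into uniform voxels, and each voxel contributes the trace of its sample covariance. This is a simplified, hypothetical illustration under assumed choices (voxel size, trace as the scatter measure, a minimum point count per cell); the actual method models the cell contents with normal distributions and jointly optimizes a continuous trajectory together with IMU measurements, which is omitted here.

```python
import numpy as np
from collections import defaultdict

def scattering_cost(scans, poses, cell_size=0.5):
    """Evaluate a DMSA-style objective (sketch only, assumed parameters).

    scans : list of (N_i, 3) arrays in their local sensor frames
    poses : list of (R, t) pairs (3x3 rotation, 3-vector translation)
    """
    cells = defaultdict(list)
    for points, (R, t) in zip(scans, poses):
        world = points @ R.T + t                    # transform scan into the common frame
        keys = np.floor(world / cell_size).astype(int)
        for key, p in zip(map(tuple, keys), world):
            cells[key].append(p)

    cost = 0.0
    for pts in cells.values():
        if len(pts) < 5:                            # skip sparsely populated cells
            continue
        cov = np.cov(np.asarray(pts).T)             # 3x3 sample covariance of the cell
        cost += np.trace(cov)                       # total variance as scatter measure
    return cost
```

Minimizing this cost over the poses pulls points that fall into the same cell onto a common local surface; in a full system the minimization would be wrapped in an iterative solver rather than evaluated once.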
SLAM for Indoor Mapping of Wide Area Construction Environments
Ress, Vincent, Zhang, Wei, Skuddis, David, Haala, Norbert, Soergel, Uwe
Simultaneous localization and mapping (SLAM), i.e., the concurrent estimation of sensor poses and the reconstruction of the environment as a (3D) map, has made astonishing progress. Meanwhile, large-scale applications aiming at data collection in complex environments like factory halls or construction sites are becoming feasible. However, in contrast to small-scale scenarios with building interiors separated into single rooms, shop floors and construction areas require measurements at larger distances in potentially textureless areas under difficult illumination. Pose estimation is further aggravated since, as is typical for such indoor applications, no GNSS measurements are available. In our work, we realize data collection in a large factory hall with a robot system equipped with four stereo cameras as well as a 3D laser scanner. We apply our state-of-the-art LiDAR and visual SLAM approaches and discuss the respective pros and cons of the different sensor types for trajectory estimation and dense map generation in such an environment. Additionally, dense and accurate depth maps are generated by 3D Gaussian splatting, which we plan to use in the context of our project aiming at automatic construction and site monitoring.
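As background on how a dense depth map can be read out of an optimized 3D Gaussian splatting model, the sketch below shows one common formulation: front-to-back alpha compositing of the per-Gaussian depths for a single pixel. It is a conceptual illustration under assumed inputs (already projected, sorted Gaussian contributions), not our rendering pipeline; production renderers perform this per image tile on the GPU.

```python
def composite_depth(depths, alphas):
    """Front-to-back alpha compositing of per-Gaussian depths for one pixel.
    `depths`/`alphas` are the view-space depths and opacities of the Gaussians
    contributing to this pixel, sorted front to back (assumed inputs)."""
    transmittance = 1.0
    depth = 0.0
    weight_sum = 0.0
    for d, a in zip(depths, alphas):
        w = transmittance * a                 # blending weight of this Gaussian
        depth += w * d
        weight_sum += w
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:              # early termination, pixel is saturated
            break
    # normalise so partially covered pixels still yield a valid depth
    return depth / max(weight_sum, 1e-8)
```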