Degradation-Aware Cooperative Multi-Modal GNSS-Denied Localization Leveraging LiDAR-Based Robot Detections
Pritzl, Václav, Yu, Xianjia, Westerlund, Tomi, Štěpán, Petr, Saska, Martin
–arXiv.org Artificial Intelligence
This work has been submitted to the IEEE for possible publication.

Abstract--Accurate long-term localization using onboard sensors is crucial for robots operating in Global Navigation Satellite System (GNSS)-denied environments. While complementary sensors mitigate individual degradations, carrying all the available sensor types on a single robot significantly increases the size, weight, and power demands. Distributing sensors across multiple robots enhances deployability but introduces challenges in fusing asynchronous, multi-modal data from independently moving platforms. We propose a novel adaptive multi-modal multi-robot cooperative localization approach using a factor-graph formulation to fuse asynchronous Visual-Inertial Odometry (VIO), LiDAR-Inertial Odometry (LIO), and 3D inter-robot detections from distinct robots in a loosely-coupled fashion. The approach adapts to changing conditions, leveraging reliable data to assist robots affected by sensory degradations. A novel interpolation-based factor enables fusion of the unsynchronized measurements. LIO degradations are evaluated based on the approximate scan-matching Hessian. A novel approach of weighting odometry data proportionally to the Wasserstein distance between consecutive VIO outputs is proposed. A theoretical analysis is provided, investigating the cooperative localization problem under various conditions, mainly in the presence of sensory degradations. The proposed method has been extensively evaluated on real-world data gathered with heterogeneous teams of an Unmanned Ground Vehicle (UGV) and Unmanned Aerial Vehicles (UAVs), showing that the approach provides significant improvements in localization accuracy in the presence of various sensory degradations.

In Global Navigation Satellite System (GNSS)-denied environments, fusing different localization modalities is crucial to provide robustness to various environmental challenges [1].
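The interpolation-based factor mentioned in the abstract relates a measurement (e.g. an inter-robot detection) taken between two odometry samples to the surrounding poses. A minimal sketch of the underlying idea, restricted to positions; the function name and linear-interpolation form are illustrative assumptions, not the paper's actual factor-graph formulation:

```python
import numpy as np

def interpolate_position(t, t0, p0, t1, p1):
    """Linearly interpolate an odometry position to the timestamp t of an
    asynchronous measurement, given samples p0 at t0 and p1 at t1
    (t0 <= t <= t1). Orientation would need its own interpolation
    (e.g. slerp), omitted here for brevity."""
    alpha = (t - t0) / (t1 - t0)
    return (1.0 - alpha) * np.asarray(p0, float) + alpha * np.asarray(p1, float)
```

In a factor graph, the same interpolation weights would appear in the factor's Jacobians with respect to both bracketing pose variables, so a single asynchronous detection constrains both of them.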
Visual-based localization requires cheap and lightweight sensors, but it is sensitive to illumination changes and textureless environments. This work was supported by CTU grant no. SGS23/177/OHK3/3T/13, by the Czech Science Foundation (GAČR) under research project No. 23-07517S, and by the European Union under the project Robotics and advanced industrial production (reg.
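The abstract's Wasserstein-based weighting compares consecutive VIO outputs treated as Gaussian distributions. A minimal sketch assuming the standard closed-form 2-Wasserstein distance between Gaussians; the `odometry_weight` mapping is a hypothetical placeholder, not the paper's actual weighting function:

```python
import numpy as np

def sqrtm_psd(A):
    """Symmetric PSD matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def wasserstein2_gaussian(m1, S1, m2, S2):
    """Closed-form 2-Wasserstein distance between N(m1, S1) and N(m2, S2):
    W2^2 = |m1 - m2|^2 + tr(S1 + S2 - 2 (S2^1/2 S1 S2^1/2)^1/2)."""
    mean_term = np.sum((np.asarray(m1, float) - np.asarray(m2, float)) ** 2)
    S2_half = sqrtm_psd(np.asarray(S2, float))
    cross = sqrtm_psd(S2_half @ np.asarray(S1, float) @ S2_half)
    cov_term = np.trace(S1 + S2 - 2.0 * cross)
    return float(np.sqrt(max(mean_term + cov_term, 0.0)))

def odometry_weight(w2, scale=1.0):
    # Hypothetical monotone mapping: a larger distance between consecutive
    # VIO outputs suggests degradation, so the odometry gets a lower weight.
    return 1.0 / (1.0 + w2 / scale)
```

Because the 2-Wasserstein distance accounts for both the mean offset and the covariance mismatch, it captures inconsistency between consecutive estimates that a pure position residual would miss.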
Oct-24-2025
- Genre:
- Research Report > Promising Solution (0.34)
- Technology:
- Information Technology > Artificial Intelligence > Robots (1.00)