LT-Exosense: A Vision-centric Multi-session Mapping System for Lifelong Safe Navigation of Exoskeletons

Jianeng Wang, Matias Mattamala, Christina Kassab, Nived Chebrolu, Guillaume Burger, Fabio Elnecave, Marine Petriaux, Maurice Fallon

arXiv.org Artificial Intelligence 

Figure 1: LT-Exosense is capable of merging multiple sessions generated by a previous work, Exosense, a vision-centric scene understanding system with its sensing unit (Top-Right) integrated into a self-balancing exoskeleton (b). The merged map (a) contains five sessions, with colored contours indicating the coverage area of each session. Such a merged map can be further converted into a navigation map, enabling obstacle-free planning spanning multiple sessions.

Abstract: Self-balancing exoskeletons offer a promising mobility solution for individuals with lower-limb disabilities. For reliable long-term operation, these exoskeletons require a perception system that remains effective in changing environments. In this work, we introduce LT-Exosense, a vision-centric, multi-session mapping system designed to support long-term (semi-)autonomous navigation for exoskeleton users. LT-Exosense extends single-session mapping capabilities by incrementally fusing spatial knowledge across multiple sessions, detecting environmental changes, and updating a persistent global map. This representation enables intelligent path planning that can adapt to newly observed obstacles and recover previous routes when obstructions are removed. We validate LT-Exosense through several real-world experiments, demonstrating a scalable multi-session map that achieves an average point-to-point error below 5 cm when compared against ground-truth laser scans.
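The abstract reports map accuracy as an average point-to-point error against ground-truth laser scans. The paper itself defines how this metric is computed; as a minimal illustrative sketch (not the authors' implementation), the metric can be understood as the mean distance from each reconstructed map point to its nearest neighbor in the ground-truth scan, after the two clouds have been aligned:

```python
import math

def avg_point_to_point_error(map_pts, gt_pts):
    """Mean distance from each mapped point to its nearest ground-truth point.

    Assumes both point clouds are already expressed in the same (aligned)
    coordinate frame. Brute-force search; fine for small illustrative clouds,
    whereas a real evaluation would use a KD-tree over dense scans.
    """
    def nearest_dist(p):
        return min(math.dist(p, q) for q in gt_pts)
    return sum(nearest_dist(p) for p in map_pts) / len(map_pts)

# Toy example: one point off by 3 cm, one exact match -> mean error 1.5 cm.
err = avg_point_to_point_error(
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(0.0, 0.0, 0.03), (1.0, 0.0, 0.0)],
)
```

A sub-5 cm average under this kind of metric indicates that the merged multi-session map stays metrically consistent with independently captured laser scans.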
