REASAN: Learning Reactive Safe Navigation for Legged Robots

Qihao Yuan, Ziyu Cao, Ming Cao, Kailai Li

arXiv.org Artificial Intelligence 

Abstract-- We present a novel modularized end-to-end framework for reactive legged navigation in complex dynamic environments using a single light detection and ranging (LiDAR) sensor. The system comprises four simulation-trained modules: three reinforcement-learning (RL) policies for locomotion, safety shielding, and navigation, and a transformer-based exteroceptive estimator that processes raw point-cloud inputs. This modular decomposition of complex legged motor-control tasks enables lightweight neural networks with simple architectures, trained using standard RL practices with targeted reward shaping and curriculum design, without reliance on heuristics or sophisticated policy-switching mechanisms. We conduct comprehensive ablations to validate our design choices and demonstrate improved robustness compared to existing approaches in challenging navigation tasks. The resulting reactive safe navigation (REASAN) system achieves fully onboard, real-time reactive navigation in both single- and multi-robot settings in complex environments. We release our training and deployment code at https://github.com/ASIG-X/REASAN

Legged robots offer distinct advantages owing to their universal mobility, with application scenarios expanding across search and rescue, logistics, entertainment, industrial inspection, and forestry inventories [1]-[4]. Recent advances in quadrupedal locomotion have demonstrated remarkable performance, particularly in handling complex static terrains [5]-[7].
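The modular decomposition described above can be illustrated with a minimal sketch of one control step. All function bodies here are hypothetical stand-ins (toy heuristics, not the trained networks from the paper); only the data flow — LiDAR point cloud → exteroceptive estimator → navigation policy → safety shield → locomotion policy — follows the abstract's description.

```python
import numpy as np

def exteroceptive_estimator(point_cloud):
    # Stand-in for the transformer-based estimator: compresses a raw
    # LiDAR point cloud (N x 3) into a fixed-size latent observation.
    # Toy encoding: per-axis means and minima (6-D latent).
    return np.concatenate([point_cloud.mean(axis=0), point_cloud.min(axis=0)])

def navigation_policy(latent, goal):
    # Hypothetical navigation policy: maps the exteroceptive latent and a
    # 2-D goal to a base velocity command (vx, vy, yaw_rate).
    direction = goal / (np.linalg.norm(goal) + 1e-6)
    return np.array([direction[0], direction[1], 0.0])

def safety_shield(cmd, latent):
    # Hypothetical safety shield: attenuates the commanded velocity when
    # the latent indicates low obstacle clearance (toy proxy below).
    clearance = np.linalg.norm(latent[3:5])
    return cmd * np.clip(clearance, 0.0, 1.0)

def locomotion_policy(cmd):
    # Hypothetical locomotion policy: maps the shielded velocity command
    # to 12 joint position targets for a quadruped (stub mapping).
    return np.tile(cmd, 4)

# One control step of the modular pipeline.
rng = np.random.default_rng(0)
point_cloud = rng.uniform(-2.0, 2.0, size=(256, 3))   # fake LiDAR scan
goal = np.array([3.0, 1.0])

latent = exteroceptive_estimator(point_cloud)
safe_cmd = safety_shield(navigation_policy(latent, goal), latent)
joint_targets = locomotion_policy(safe_cmd)
print(joint_targets.shape)  # (12,)
```

The point of the sketch is the interface boundaries: because each module consumes only the previous module's output, each policy can be trained separately in simulation with its own reward shaping, as the abstract emphasizes.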
