Learning Risk-Aware Costmaps via Inverse Reinforcement Learning for Off-Road Navigation
Triest, Samuel, Castro, Mateo Guaman, Maheshwari, Parv, Sivaprakasam, Matthew, Wang, Wenshan, Scherer, Sebastian
Designing costmaps for off-road driving is often a challenging and engineering-intensive task. Recent work in costmap design for off-road driving focuses on training deep neural networks to predict costmaps from sensory observations using corpora of expert driving data. However, such approaches are generally subject to over-confident mispredictions and are rarely evaluated in-the-loop on physical hardware. We present an inverse reinforcement learning-based method for efficiently training deep cost functions that are uncertainty-aware. We do so by leveraging recent advances in highly parallel model-predictive control and robotic risk estimation. In addition to demonstrating improvement at reproducing expert trajectories, we also evaluate the efficacy of these methods in challenging off-road navigation scenarios. We observe that our method significantly outperforms a geometric baseline, resulting in a 44% improvement in expert path reconstruction and 57% fewer interventions in practice. We also observe that varying the risk tolerance of the vehicle results in qualitatively different navigation behaviors, especially with respect to higher-risk scenarios such as slopes and tall grass.
arXiv.org Artificial Intelligence
Jan-31-2023
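
The abstract does not spell out the exact risk formulation, so the following is only a minimal illustrative sketch, not the authors' implementation. It assumes the uncertainty-aware cost model yields per-cell cost samples (e.g., from an ensemble or a learned cost distribution) and reduces them with a Conditional Value at Risk (CVaR) estimate at a tunable risk tolerance, showing how varying that tolerance could produce the qualitatively different navigation behaviors described. The function name, array shapes, and the CVaR choice are all assumptions.

```python
# Illustrative sketch (assumed, not the paper's code): reduce per-cell cost
# samples to a risk-aware costmap via an empirical CVaR at risk level alpha.
import numpy as np


def cvar_costmap(cost_samples: np.ndarray, alpha: float = 0.9) -> np.ndarray:
    """Reduce per-cell cost samples of shape (S, H, W) to a costmap (H, W).

    alpha close to 1.0 averages only the worst sampled costs (risk-averse);
    alpha near 0.0 recovers approximately the mean cost (risk-neutral).
    """
    # Per-cell value-at-risk: the alpha-quantile of the sampled costs.
    var = np.quantile(cost_samples, alpha, axis=0)
    # Empirical CVaR: mean of the samples at or above that quantile.
    tail_mask = cost_samples >= var[None, ...]
    tail_sum = np.where(tail_mask, cost_samples, 0.0).sum(axis=0)
    tail_count = np.maximum(tail_mask.sum(axis=0), 1)
    return tail_sum / tail_count


# Example: 32 sampled costmaps over a 100x100 grid. A higher alpha yields a
# more conservative map, which a sampling-based MPC planner would then use
# to score candidate trajectories and steer around high-risk cells.
samples = np.random.gamma(shape=2.0, scale=1.0, size=(32, 100, 100))
risk_averse_map = cvar_costmap(samples, alpha=0.95)
risk_neutral_map = cvar_costmap(samples, alpha=0.0)
```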