RE-MOVE: An Adaptive Policy Design for Robotic Navigation Tasks in Dynamic Environments via Language-Based Feedback
Chakraborty, Souradip, Weerakoon, Kasun, Poddar, Prithvi, Elnoor, Mohamed, Narayanan, Priya, Busart, Carl, Tokekar, Pratap, Bedi, Amrit Singh, Manocha, Dinesh
Abstract: Reinforcement learning-based policies for continuous control robotic navigation tasks often fail to adapt to changes in the environment during real-time deployment, which may result in catastrophic failures. To address this limitation, we propose a novel approach called RE-MOVE (REquest help and MOVE on) to adapt an already trained policy to real-time changes in the environment without re-training, by utilizing language-based feedback. The proposed approach essentially boils down to addressing two main challenges: (1) when to ask for feedback and, if received, (2) how to incorporate the feedback into the trained policy. RE-MOVE incorporates an epistemic uncertainty-based framework to determine the optimal time to request instructions-based feedback, together with a natural language processing (NLP) paradigm with efficient prompt design to incorporate it. To show the efficacy of the proposed approach, we performed extensive synthetic and real-world evaluations in several test-time dynamic navigation scenarios, resulting in up to 80% enhancement in the attainment of successful goals, coupled with a reduction of 13.50% in the normalized trajectory length, as compared to alternative approaches, particularly in …

Figure caption: This figure shows robot navigation using our RE-MOVE approach with a language-based feedback scenario. In dynamic scenes, RE-MOVE identifies the uncertainties that appear in the observation space (i.e., a LiDAR laser scan-based 2D cost map in our context) and requests assistance from a human. Such assistance is essential in scenarios where the laser scan misleadingly detects pliable regions (i.e., perceptually deceptive yet navigable objects such as hanging clothes, curtains, thin tall grass, etc.) as solid obstacles due to the sensing limitations of the LiDAR.

From the introduction: Reinforcement learning (RL) has gained popularity for navigating complex, dynamic environments [1]. … To tackle this, we quantify epistemic uncertainty precisely, considering specific design considerations within …
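The abstract names two design questions: when to request feedback, gated by epistemic uncertainty over the LiDAR-based cost map, and how to fold the received language feedback into an already trained policy without re-training. The sketch below is purely illustrative and is not the authors' implementation: it assumes ensemble disagreement as the epistemic-uncertainty proxy, a fixed trigger threshold, and language feedback that has already been grounded into a cost-map region to clear; every function name, constant, and data format here is a hypothetical stand-in.

```python
"""Illustrative sketch (not the authors' code) of the two questions the
RE-MOVE abstract raises: (1) when to ask for language feedback, gated by
epistemic uncertainty over a LiDAR-based 2D cost map, and (2) how to fold
the feedback into the input of a frozen, already-trained policy."""

import numpy as np

GRID = 64                        # hypothetical cost-map resolution (GRID x GRID cells)
UNCERTAINTY_THRESHOLD = 0.005    # hypothetical trigger level for requesting feedback


def ensemble_costmaps(lidar_scan: np.ndarray, n_members: int = 5) -> np.ndarray:
    """Stand-in for an ensemble of perception heads mapping a scan to cost maps.
    Disagreement between members serves as the epistemic-uncertainty proxy here;
    the paper's actual estimator may differ."""
    rng = np.random.default_rng(0)
    members = [np.clip(lidar_scan + rng.normal(scale=0.1, size=lidar_scan.shape), 0.0, 1.0)
               for _ in range(n_members)]
    return np.stack(members)


def epistemic_uncertainty(maps: np.ndarray) -> float:
    """Mean per-cell variance across ensemble members."""
    return float(maps.var(axis=0).mean())


def apply_language_feedback(cost_map: np.ndarray, feedback: dict) -> np.ndarray:
    """Toy stand-in for the NLP/prompting step: assume an utterance such as
    'the hanging curtain ahead is passable' has already been grounded into a
    rectangular region of the cost map that should be treated as free space."""
    r0, r1, c0, c1 = feedback["region"]
    patched = cost_map.copy()
    patched[r0:r1, c0:c1] = 0.0   # mark the pliable region as navigable
    return patched


def navigation_step(lidar_scan: np.ndarray, policy, ask_human) -> np.ndarray:
    """One decision step: request help only when perception is uncertain, then
    run the frozen pre-trained policy on the (possibly patched) cost map."""
    maps = ensemble_costmaps(lidar_scan)
    cost_map = maps.mean(axis=0)
    if epistemic_uncertainty(maps) > UNCERTAINTY_THRESHOLD:
        feedback = ask_human()                     # e.g. parsed language feedback
        cost_map = apply_language_feedback(cost_map, feedback)
    return policy(cost_map)                        # no re-training involved


if __name__ == "__main__":
    fake_scan = np.random.default_rng(1).random((GRID, GRID))
    dummy_policy = lambda cm: np.array([0.5, 0.1])       # (linear, angular) velocity
    dummy_human = lambda: {"region": (20, 30, 20, 30)}   # pre-grounded feedback
    print("action:", navigation_step(fake_scan, dummy_policy, dummy_human))
```

In this toy loop the pre-trained policy is treated as a black box; only its input cost map is edited when feedback arrives, which mirrors the abstract's stated goal of adapting to environment changes without re-training.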
arXiv.org Artificial Intelligence
Sep-17-2023
- Country:
- North America > United States
- Maryland > Prince George's County (0.14)
- New York (0.28)
- Genre:
- Research Report
- New Finding (0.34)
- Promising Solution (0.34)
- Industry:
- Transportation (0.46)