Decision Making for Human-in-the-loop Robotic Agents via Uncertainty-Aware Reinforcement Learning

Siddharth Singi, Zhanpeng He, Alvin Pan, Sandip Patel, Gunnar A. Sigurdsson, Robinson Piramuthu, Shuran Song, Matei Ciocarlie

arXiv.org Artificial Intelligence 

Abstract-- In a Human-in-the-Loop paradigm, a robotic agent is able to act mostly autonomously in solving a task, but can request help from an external expert when needed. In this paper, we present a Reinforcement Learning based approach to this problem, where a semi-autonomous agent asks for external assistance when it has low confidence in the eventual success of the task. We show that this estimate can be iteratively improved during training using a Bellman-like recursion. On discrete navigation problems with both fully- and partially-observable state information, we show that our method makes effective use of a limited budget of expert calls at run-time, despite having no access to the expert at training time.

Figure 1: An illustration of HULA, the method we propose in this paper. An agent without the help of an expert (A) cannot localize itself accurately due to partial observability, goes down the wrong passage and fails to reach the target. A HULA agent (B) decides to request assistance from an available external expert in the states marked with a red E and …
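The abstract's core idea — estimate confidence in eventual task success and request help when it is low — can be illustrated with a standard Bellman-like recursion for the second moment of the return, from which a variance (uncertainty) estimate follows. This is a minimal tabular sketch of that general technique, not the authors' exact algorithm; the function names, the fixed-policy setting, and the threshold rule are assumptions for illustration.

```python
# Hypothetical sketch: variance of the return via a Bellman-like recursion,
# used to decide when a semi-autonomous agent should ask an expert for help.
# Assumes a fixed policy, deterministic per-state reward R, and tabular states.
import numpy as np

def variance_aware_values(P, R, gamma, n_iters=500):
    """Iteratively evaluate the value V and second moment M of the return.

    P: (S, S) state-transition matrix under the current policy.
    R: (S,) expected immediate reward per state.
    Returns (V, Var) where Var(s) = M(s) - V(s)^2.
    """
    S = len(R)
    V = np.zeros(S)
    M = np.zeros(S)  # second moment of the return, E[G^2]
    for _ in range(n_iters):
        EV = P @ V   # E[V(s')] for each state s
        # Bellman-like recursion for the second moment:
        #   M(s) = E[(r + gamma * G')^2]
        #        = r^2 + 2*gamma*r*E[G'] + gamma^2 * E[G'^2]
        M = R**2 + 2 * gamma * R * EV + gamma**2 * (P @ M)
        V = R + gamma * EV
    var = np.maximum(M - V**2, 0.0)  # clip tiny negative values from rounding
    return V, var

def should_ask_expert(var_s, threshold):
    """Request external assistance when return variance exceeds a threshold."""
    return var_s > threshold
```

In a deterministic chain the recursion yields zero variance, so the agent never asks for help; stochastic transitions inflate M(s) above V(s)^2, raising the variance exactly in the states where the outcome is uncertain.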
