"Good Robot!": Efficient Reinforcement Learning for Multi-Step Visual Tasks via Reward Shaping
Hundt, Andrew, Killeen, Benjamin, Kwon, Heeyeon, Paxton, Chris, Hager, Gregory D.
"Good Robot!": Efficient Reinforcement Learning for Multi-Step Visual T asks via Reward Shaping Andrew Hundt 1, Benjamin Killeen 1, Heeyeon Kwon 1, Chris Paxton 2, and Gregory D. Hager 1 Abstract -- In order to learn effectively, robots must be able to extract the intangible context by which task progress and mistakes are defined. In the domain of reinforcement learning, much of this information is provided by the reward function. Hence, reward shaping is a necessary part of how we can achieve state-of-the-art results on complex, multi-step tasks. However, comparatively little work has examined how reward shaping should be done so that it captures task context, particularly in scenarios where the task is long-horizon and failure is highly consequential. Our Schedule for Positive T ask (SPOT) reward trains our Efficient Visual T ask (EVT) model to solve problems that require an understanding of both task context and workspace constraints of multi-step block arrangement tasks. In simulation EVT can completely clear adversarial arrangements of objects by pushing and grasping in 99% of cases vs an 82% baseline in prior work. For random arrangements EVT clears 100% of test cases at 86% action efficiency vs 61% efficiency in prior work. EVT SPOT is also able to demonstrate context understanding and complete stacks in 74% of trials compared to a baseline of 5% with EVT alone.
arXiv.org Artificial Intelligence
Sep-25-2019