Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation
Gregory Kahn, Adam Villaflor, Bosen Ding, Pieter Abbeel, Sergey Levine
–arXiv.org Artificial Intelligence
Enabling robots to autonomously navigate complex environments is essential for real-world deployment. Prior methods approach this problem by having the robot maintain an internal map of the world, and then using a localization and planning method to navigate through that internal map. However, these approaches often rely on a variety of assumptions, are computationally intensive, and do not learn from failures. In contrast, learning-based methods improve as the robot acts in the environment, but are difficult to deploy in the real world due to their high sample complexity. To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based. We then instantiate this graph to form a navigation model that learns from raw images and is sample efficient. Our simulated car experiments explore the design decisions of our navigation model, and show our approach outperforms single-step and $N$-step double Q-learning. We also evaluate our approach on a real-world RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, self-supervised training. Videos of the experiments and code can be found at github.com/gkahn13/gcg
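The abstract's interpolation between model-free and model-based methods can be seen in the $N$-step bootstrapped target it compares against: with $N=1$ the target reduces to standard single-step Q-learning, while larger $N$ relies more on observed rewards along a rollout, moving toward model-based behavior. A minimal sketch of such an $N$-step target (function name and parameters are illustrative, not from the paper's code):

```python
import numpy as np

def n_step_q_target(rewards, q_bootstrap, gamma=0.99, n=5):
    """Bootstrapped N-step return: the sum of the next n discounted
    rewards plus the discounted Q-value estimate at step n.

    rewards:     array containing at least the next n rewards
    q_bootstrap: Q-value estimate at the state reached after n steps
                 (in double Q-learning, evaluated by the target network
                 at the action chosen by the online network)
    """
    discounts = gamma ** np.arange(n)          # [1, gamma, ..., gamma^(n-1)]
    n_step_return = np.sum(discounts * rewards[:n])
    return float(n_step_return + gamma ** n * q_bootstrap)
```

With `n=1` this is the ordinary one-step Q-learning target; as `n` grows, the target depends increasingly on the actual reward sequence and less on the bootstrapped value estimate.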
May-17-2018
- Country:
- North America > United States > California (0.14)
- Genre:
- Research Report (0.82)
- Industry:
- Transportation (0.46)