Guiding Search in Continuous State-Action Spaces by Learning an Action Sampler From Off-Target Search Experience
Kim, Beomjoon (Massachusetts Institute of Technology) | Kaelbling, Leslie Pack (Massachusetts Institute of Technology) | Lozano-Pérez, Tomás (Massachusetts Institute of Technology)
In robotics, it is essential to be able to plan efficiently in high-dimensional continuous state-action spaces for long horizons. For such complex planning problems, unguided uniform sampling of actions until a path to a goal is found is hopelessly inefficient, and gradient-based approaches often fall short when the optimization manifold of a given problem is not smooth. In this paper, we present an approach that guides search in continuous spaces for generic planners by learning an action sampler from past search experience. We use a Generative Adversarial Network (GAN) to represent an action sampler, and address an important issue: search experience consists of a relatively large number of actions that are not on a solution path and a relatively small number of actions that actually are on a solution path. We introduce a new technique, based on an importance-ratio estimation method, for using samples from a non-target distribution to make GAN learning more data-efficient. We provide theoretical guarantees and empirical evaluation in three challenging continuous robot planning problems to illustrate the effectiveness of our algorithm.
Feb-8-2018
- Country:
  - North America > United States > Massachusetts (0.14)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks (1.00)
    - Representation & Reasoning
      - Planning & Scheduling (0.87)
      - Search (1.00)
    - Robots (1.00)
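
To make the abstract's core idea concrete, here is a minimal sketch (not the authors' implementation) of a conditional GAN action sampler whose discriminator loss reweights off-target search samples by an estimated importance ratio. The network sizes, optimizer settings, dimensions, and the simple per-sample weighting of the "real" term are assumptions for illustration; the paper's importance-ratio estimator itself is not reproduced here.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, NOISE_DIM = 10, 4, 8   # assumed dimensions for illustration

class Generator(nn.Module):
    """Conditional action sampler: maps (state, noise) to a candidate action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NOISE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM))

    def forward(self, state, noise):
        return self.net(torch.cat([state, noise], dim=-1))

class Discriminator(nn.Module):
    """Scores (state, action) pairs; outputs logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss(reduction='none')   # per-sample loss so it can be reweighted

def d_step(states, actions, weights):
    """One discriminator update. `weights` holds importance ratios for the 'real'
    data: 1.0 for on-solution-path actions, estimated ratios for off-target
    search actions (the ratio estimator itself is not sketched here)."""
    noise = torch.randn(states.size(0), NOISE_DIM)
    fake_actions = G(states, noise).detach()
    real_logits = D(states, actions)
    fake_logits = D(states, fake_actions)
    loss_real = (weights.view(-1, 1)
                 * bce(real_logits, torch.ones_like(real_logits))).mean()
    loss_fake = bce(fake_logits, torch.zeros_like(fake_logits)).mean()
    loss = loss_real + loss_fake
    opt_d.zero_grad()
    loss.backward()
    opt_d.step()
    return loss.item()

def g_step(states):
    """One generator update: push sampled actions toward the discriminator's
    'solution-like' decision region."""
    noise = torch.randn(states.size(0), NOISE_DIM)
    logits = D(states, G(states, noise))
    loss = bce(logits, torch.ones_like(logits)).mean()
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

# Toy usage with random tensors standing in for logged search experience.
states = torch.randn(32, STATE_DIM)
actions = torch.randn(32, ACTION_DIM)
weights = torch.rand(32)           # stand-in for estimated importance ratios
d_step(states, actions, weights)
g_step(states)
```

The only departure from a standard conditional GAN in this sketch is the per-sample weight on the discriminator's "real" term: downweighting off-target actions by an estimated importance ratio lets the abundant near-miss search data shape the sampler without being treated as on-solution-path data outright.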