Fast and Compute-efficient Sampling-based Local Exploration Planning via Distribution Learning

Lukas Schmid, Chao Ni, Yuliang Zhong, Roland Siegwart, Olov Andersson

arXiv.org Artificial Intelligence 

Abstract: Exploration is a fundamental problem in robotics. While sampling-based planners have shown high performance and robustness, they are often compute-intensive and can exhibit high variance. To this end, we propose to learn both components of sampling-based exploration. We present a method to directly learn an underlying informed distribution of views based on the spatial context in the robot's map, and further explore a variety of methods to also learn the information gain of each sample. We show in a thorough experimental evaluation that our proposed system improves exploration performance by up to 28% over classical methods, and find that learning the gains in addition to the sampling distribution can provide favorable performance-vs.-compute trade-offs for compute-constrained systems. We demonstrate in simulation and on a low-cost mobile robot that our system generalizes well to varying environments.

[Figure 1: A small compute-constrained mobile robot exploring an ...]
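To make the pipeline in the abstract concrete, the sketch below illustrates the general structure of a sampling-based exploration step in which both the view sampler and the gain estimator could be learned models. This is a minimal conceptual sketch, not the authors' implementation: class names such as `LearnedViewSampler`, `LearnedGainEstimator`, and the map-crop representation are hypothetical placeholders, and the learned components are stubbed out with simple random baselines.

```python
# Conceptual sketch of sampling-based exploration with a learned view
# distribution and a learned gain estimator. All names are hypothetical
# placeholders and do not correspond to the authors' released code.

import numpy as np


def crop_local_map(occupancy_grid, robot_xy, half_size=20):
    """Extract a square crop of the map around the robot (the spatial context)."""
    x, y = int(robot_xy[0]), int(robot_xy[1])
    x0, x1 = max(0, x - half_size), min(occupancy_grid.shape[0], x + half_size)
    y0, y1 = max(0, y - half_size), min(occupancy_grid.shape[1], y + half_size)
    return occupancy_grid[x0:x1, y0:y1]


class LearnedViewSampler:
    """Stand-in for a model that outputs an informed distribution over views.

    Here views are drawn uniformly in a disc around the robot; a learned model
    would instead condition the distribution on the local map crop.
    """

    def sample(self, map_crop, robot_xy, n_samples=32, radius=5.0):
        angles = np.random.uniform(0.0, 2.0 * np.pi, n_samples)
        dists = np.random.uniform(0.5, radius, n_samples)
        xs = robot_xy[0] + dists * np.cos(angles)
        ys = robot_xy[1] + dists * np.sin(angles)
        yaws = np.random.uniform(0.0, 2.0 * np.pi, n_samples)
        return np.stack([xs, ys, yaws], axis=1)  # candidate views (x, y, yaw)


class LearnedGainEstimator:
    """Stand-in for a model that predicts the information gain of each view.

    A classical planner would instead ray-cast into the map and count the
    unknown cells visible from each view, which is the expensive step a
    learned estimator is meant to replace.
    """

    def predict(self, map_crop, views):
        # Placeholder scores; a trained model would regress the expected gain.
        return np.random.rand(len(views))


def plan_next_view(occupancy_grid, robot_xy):
    """One planning step: sample candidate views, score them, pick the best."""
    crop = crop_local_map(occupancy_grid, robot_xy)
    views = LearnedViewSampler().sample(crop, robot_xy)
    gains = LearnedGainEstimator().predict(crop, views)
    return views[int(np.argmax(gains))]


if __name__ == "__main__":
    grid = np.full((100, 100), -1, dtype=np.int8)  # -1 unknown, 0 free, 1 occupied
    grid[40:60, 40:60] = 0                         # a small explored free-space patch
    print("next view (x, y, yaw):", plan_next_view(grid, robot_xy=(50.0, 50.0)))
```

The trade-off highlighted in the abstract maps onto this structure: replacing the ray-casting gain computation with a learned predictor trades a small amount of accuracy for a large reduction in per-sample compute, which is most attractive on compute-constrained platforms.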
