Learning Robotic Navigation from Experience: Principles, Methods, and Recent Results
arXiv.org Artificial Intelligence
Navigation is one of the most heavily studied problems in robotics [3]. It is typically approached via mapping and planning: constructing a geometric representation of the world from observations, then planning a path through this model using motion planning algorithms [4-6]. However, such geometric approaches abstract away significant physical and semantic aspects of the navigation problem, leaving a range of real-world situations difficult to handle in practice (see Figure 1). These challenges require special handling, resulting in complex systems with many components. Some works have incorporated machine learning techniques, either to learn navigational skills in simulation or to learn perception systems for navigation from human-provided labels. In this article, we instead argue that learned navigational models, trained directly on real-world experience rather than on human-provided labels or simulators, offer the most promising long-term direction for a general solution to navigation. We refer to such approaches as experiential learning, because they learn directly from past experience of performing real-world navigation. As we discuss in Section 2, such methods relate closely to reinforcement learning.
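To make the map-then-plan pipeline concrete, here is a minimal sketch of its planning stage: A* search over a 2-D occupancy grid, where the grid stands in for the geometric world model built from observations. All names and the grid representation are illustrative assumptions, not drawn from the article; a real system would plan over a map estimated by SLAM or similar.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None if
    the goal is unreachable. Uses the Manhattan distance as the heuristic,
    which is admissible for unit-cost 4-connected motion.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    best_g = {start: 0}          # cheapest known cost-to-come per cell
    came_from = {start: None}    # parent pointers for path reconstruction
    open_set = [(h(start), start)]
    closed = set()

    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur in closed:
            continue             # stale heap entry; already expanded
        closed.add(cur)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = best_g[cur] + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None
```

The abstraction the article critiques is visible here: the planner sees only free versus occupied cells, so physical properties (tall grass that is actually traversable) and semantics (a door that should not be entered) are invisible to it unless handled by additional components.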
Dec-13-2022