Learning to See Physical Properties with Active Sensing Motor Policies

Margolis, Gabriel B., Fu, Xiang, Ji, Yandong, Agrawal, Pulkit

arXiv.org Artificial Intelligence 

In recent years, legged locomotion controllers have exhibited remarkable stability and control across a wide range of terrains such as pavement, grass, sand, ice, slopes, and stairs [1, 2, 3, 4, 5, 6, 7, 8]. State-of-the-art approaches using sim-to-real learning primarily rely on proprioception and depth sensing to perceive obstacles and terrain [5, 7, 8, 9, 10, 11, 12, 13, 14, 15]. These approaches discard valuable information about the terrain's material properties beyond geometry, such as slipperiness and softness, that is conveyed by color images. A primary reason for this choice is that sim-to-real transfer has been demonstrated with depth images [5, 7, 10], but it remains unclear how well such transfer works with color (RGB) images. To utilize information beyond geometry, some works learn to predict task performance or task-relevant properties (e.g., traversability) from color images using data collected in the real world [16, 17, 18, 19, 20]. However, the terrain property predictors learned in prior works are task- or policy-specific, which limits their applicability to new tasks. To perceive a multipurpose representation of the terrain, we propose predicting the terrain's physical properties (e.g., friction, roughness), which are invariant to the policy and task.
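To illustrate the supervision structure behind the proposal, the following is a deliberately minimal sketch: it regresses a physical property (a friction coefficient) from visual features of a terrain patch using synthetic data. The dataset, the mean-RGB features, and the linear model are all hypothetical stand-ins; the actual approach would use a learned vision model trained on properties estimated from real robot experience.

```python
import numpy as np

# Hypothetical sketch: map per-patch color features to a physical
# property (friction) via least-squares regression. Synthetic data
# stands in for real (image, estimated-friction) pairs.
rng = np.random.default_rng(0)

# Features: mean RGB of a terrain patch; labels: friction estimated
# from (simulated) robot experience under an assumed linear mapping.
X = rng.uniform(0.0, 1.0, size=(200, 3))
true_w = np.array([0.8, -0.3, 0.1])
friction = X @ true_w + 0.4 + 0.01 * rng.standard_normal(200)

# Fit a linear predictor with a bias term.
A = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(A, friction, rcond=None)

def predict_friction(rgb):
    """Predict a friction coefficient from a patch's mean RGB color."""
    return float(np.append(np.asarray(rgb), 1.0) @ w)

print(predict_friction([0.5, 0.5, 0.5]))
```

The key point the sketch captures is that the prediction target is a physical quantity rather than a task score, so the same predictor can in principle inform any downstream policy or task.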
