Learning to See Physical Properties with Active Sensing Motor Policies
Margolis, Gabriel B., Fu, Xiang, Ji, Yandong, Agrawal, Pulkit
–arXiv.org Artificial Intelligence
In recent years, legged locomotion controllers have exhibited remarkable stability and control across a wide range of terrains such as pavement, grass, sand, ice, slopes, and stairs [1, 2, 3, 4, 5, 6, 7, 8]. State-of-the-art approaches using sim-to-real learning primarily rely on proprioception and depth sensing to perceive obstacles and terrain [5, 7, 8, 9, 10, 11, 12, 13, 14, 15]. These approaches discard valuable information about the terrain's material properties beyond geometry, such as slipperiness and softness, that is conveyed by color images. A primary reason for this choice is that sim-to-real transfer has been shown to work with depth images [5, 7, 10], but it remains unclear how well such transfer works with color (RGB) images. To utilize information beyond geometry, some works learn to predict task performance or task-relevant properties (e.g., traversability) from color images using data collected in the real world [16, 17, 18, 19, 20]. However, the terrain property predictors learned in prior works are task- or policy-specific, which limits their applicability to new tasks. To perceive a multipurpose representation of the terrain, we propose predicting the terrain's physical properties (e.g., friction, roughness) that are invariant to the policy and task.
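To make the proposed representation concrete, the sketch below shows one plausible form such a predictor could take: a small convolutional network that regresses policy- and task-invariant property values (e.g., friction, roughness) from an RGB terrain patch. This is a minimal illustration, not the authors' implementation; the class name, network sizes, and label source are assumptions for the example.

```python
# Hypothetical sketch: regress terrain physical properties from an RGB patch.
# In practice, property labels could come from estimates gathered during
# real-world locomotion; here they are placeholder tensors.
import torch
import torch.nn as nn

class TerrainPropertyPredictor(nn.Module):
    def __init__(self, num_properties: int = 2):
        super().__init__()
        # Small CNN encoder over an RGB crop of the terrain ahead of the robot.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Linear head outputs one value per property, e.g., [friction, roughness].
        self.head = nn.Linear(64, num_properties)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(rgb))

# Usage example with hypothetical 64x64 camera crops and property labels.
model = TerrainPropertyPredictor()
patches = torch.rand(8, 3, 64, 64)   # batch of RGB terrain patches
targets = torch.rand(8, 2)           # placeholder friction/roughness labels
loss = nn.functional.mse_loss(model(patches), targets)
loss.backward()
```

Because the predicted quantities are physical properties rather than a score tied to one controller, the same predictor could in principle be reused across different policies and downstream tasks.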
Nov-2-2023