Modality Selection and Skill Segmentation via Cross-Modality Attention
Jiang, Jiawei, Ota, Kei, Jha, Devesh K., Kanezaki, Asako
arXiv.org Artificial Intelligence
Incorporating additional sensory modalities such as tactile and audio into foundational robotic models poses significant challenges due to the curse of dimensionality. This work addresses this issue through modality selection. We propose a cross-modality attention (CMA) mechanism to identify and selectively utilize the modalities that are most informative for action generation at each timestep. Furthermore, we extend the application of CMA to segment primitive skills from expert demonstrations and leverage this segmentation to train a hierarchical policy capable of solving long-horizon, contact-rich manipulation tasks.
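The core idea of attending over per-modality features and keeping only the most informative ones can be sketched as follows. This is a minimal illustration under assumed conventions (a single query vector attending over one token per modality, scaled dot-product scoring); the function and variable names are hypothetical, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_modality_attention(query, modality_tokens):
    """Score each modality token against a query and return
    attention weights plus the attention-weighted fused feature.

    query:           (d,)   e.g. an embedding of the current robot state
    modality_tokens: (m, d) one embedding per sensory modality
    """
    d = query.shape[-1]
    scores = modality_tokens @ query / np.sqrt(d)  # (m,) scaled dot products
    weights = softmax(scores)                      # (m,) modality relevance
    fused = weights @ modality_tokens              # (d,) weighted combination
    return weights, fused

# Toy usage: three hypothetical modality embeddings (vision, tactile, audio).
rng = np.random.default_rng(0)
d = 8
tokens = rng.normal(size=(3, d))
query = rng.normal(size=d)
weights, fused = cross_modality_attention(query, tokens)
selected = int(np.argmax(weights))  # index of the highest-weighted modality
```

In this reading, the attention weights serve double duty: they gate which modality dominates action generation at each timestep, and their change over time can mark boundaries between primitive skills in a demonstration.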
Apr-22-2025
- Country:
- Asia > Japan
- Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.05)
- Europe > Netherlands
- South Holland > Delft (0.04)
- North America > United States (0.05)
- Genre:
- Research Report (0.68)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Robots (1.00)