FAM-HRI: Foundation-Model Assisted Multi-Modal Human-Robot Interaction Combining Gaze and Speech
Lai, Yuzhi, Yuan, Shenghai, Zhang, Boya, Kiefer, Benjamin, Li, Peizheng, Zell, Andreas
arXiv.org Artificial Intelligence
Effective Human-Robot Interaction (HRI) is crucial for enhancing accessibility and usability in real-world robotics applications. However, existing solutions often rely on gestures or language commands, making interaction inefficient and ambiguous, particularly for users with physical impairments. In this paper, we introduce FAM-HRI, an efficient multi-modal framework for human-robot interaction that integrates language and gaze inputs via foundation models. By leveraging lightweight Meta ARIA glasses, our system captures real-time multi-modal signals and uses large language models (LLMs) to fuse user intention with scene context, enabling intuitive and precise robot manipulation. Our method accurately determines the gaze fixation time interval, reducing noise caused by the dynamic nature of gaze. Experimental evaluations demonstrate that FAM-HRI achieves a high task-execution success rate while keeping interaction time low, providing a practical solution for individuals with limited physical mobility or motor impairments.
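The abstract does not spell out how the fixation time interval is determined; a common baseline for this step is dispersion-threshold identification (I-DT), which treats a gaze window as a fixation when the points stay spatially close for long enough. The sketch below illustrates that idea; the thresholds, names, and data layout are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of dispersion-threshold (I-DT) fixation detection,
# one common way to extract stable gaze intervals from noisy gaze data.
# Thresholds and names are assumptions, not FAM-HRI's actual method.
from dataclasses import dataclass

@dataclass
class Fixation:
    t_start: float  # seconds
    t_end: float
    x: float        # mean gaze position (normalized image coords)
    y: float

def detect_fixations(samples, max_dispersion=0.03, min_duration=0.15):
    """samples: list of (t, x, y) gaze points ordered by time.
    A window counts as a fixation when its spatial dispersion
    (max(x)-min(x) + max(y)-min(y)) stays under max_dispersion
    for at least min_duration seconds."""
    fixations, i, n = [], 0, len(samples)
    while i < n:
        j = i
        # Grow the window while its dispersion stays small.
        while j + 1 < n:
            window = samples[i:j + 2]
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        if samples[j][0] - samples[i][0] >= min_duration:
            window = samples[i:j + 1]
            fixations.append(Fixation(
                t_start=window[0][0],
                t_end=window[-1][0],
                x=sum(p[1] for p in window) / len(window),
                y=sum(p[2] for p in window) / len(window),
            ))
            i = j + 1   # skip past the accepted fixation
        else:
            i += 1      # slide the window start forward
    return fixations

# Example: 100 Hz gaze that dwells near (0.5, 0.5), then saccades away.
if __name__ == "__main__":
    pts = [(k / 100, 0.5 + 0.005 * (k % 2), 0.5) for k in range(30)]
    pts += [(k / 100, 0.9, 0.1) for k in range(30, 40)]
    for f in detect_fixations(pts):
        print(f"fixation {f.t_start:.2f}-{f.t_end:.2f}s at ({f.x:.2f}, {f.y:.2f})")
```

The mean position of each detected fixation would then identify the attended object, which could be passed alongside the speech transcript as scene context for the LLM.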
Mar-10-2025