An End-to-End Human Simulator for Task-Oriented Multimodal Human-Robot Collaboration

Afagh Mehri Shervedani, Siyu Li, Natawut Monaikul, Bahareh Abbasi, Barbara Di Eugenio, Milos Zefran

arXiv.org Artificial Intelligence 

This paper proposes a neural network-based user simulator that can provide a multimodal interactive environment for training Reinforcement Learning (RL) agents in collaborative tasks involving multiple modes of communication. The simulator is trained on the existing ELDERLY-AT-HOME corpus and accommodates multiple modalities such as language, pointing gestures, and haptic-ostensive actions. The paper also presents a novel multimodal data augmentation approach, which addresses the challenge of using a limited dataset due to the expensive and time-consuming nature of collecting human demonstrations. Overall, the study highlights the potential for using RL and multimodal user simulators in developing and improving domestic assistive robots.
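To make the interaction loop concrete, the sketch below shows how an RL agent might be trained against a learned user simulator that emits one signal per modality each turn. Everything here is an illustrative assumption: the class names, the modality labels, and the toy matching reward are placeholders, not details from the paper or the ELDERLY-AT-HOME corpus.

```python
import random

# Hypothetical modality labels standing in for the paper's language,
# pointing-gesture, and haptic-ostensive channels.
MODALITIES = ("language", "pointing", "haptic_ostensive")

class ToyUserSimulator:
    """Toy stand-in for a neural user simulator: emits a multimodal
    user action each turn; the episode ends after max_turns."""

    def __init__(self, max_turns=5, seed=0):
        self.max_turns = max_turns
        self.rng = random.Random(seed)
        self.turn = 0
        self.current = None

    def _observe(self):
        # One placeholder signal per modality (a real simulator would
        # produce these from a trained neural network).
        return {m: f"{m}_signal_{self.rng.randint(0, 2)}" for m in MODALITIES}

    def reset(self):
        self.turn = 0
        self.current = self._observe()
        return self.current

    def step(self, robot_action):
        # Toy reward: +1 when the robot's action matches the user's
        # current language cue; otherwise 0.
        reward = 1.0 if robot_action == self.current["language"] else 0.0
        self.turn += 1
        done = self.turn >= self.max_turns
        self.current = self._observe()
        return self.current, reward, done

def run_episode(env, policy):
    """Roll out one episode of agent-simulator interaction; return total reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

env = ToyUserSimulator()
# A trivial "policy" that echoes the user's language cue scores on every turn.
total = run_episode(env, policy=lambda obs: obs["language"])
```

In an actual system the echo policy would be replaced by an RL agent (e.g. a DQN or policy-gradient learner) whose experience comes entirely from such simulated dialogues, avoiding costly live human demonstrations.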
