Voice Activity Projection Model with Multimodal Encoders
Takeshi Saga, Catherine Pelachaud
arXiv.org Artificial Intelligence
Turn-taking management is crucial for any social interaction, yet it remains challenging to model in human-machine interaction due to the complexity and multimodal nature of the social context. Unlike conventional systems based on silence duration, voice activity projection (VAP) models successfully use a unified representation of turn-taking behaviors as prediction targets, which improves turn-taking prediction performance. Recently, a multimodal VAP model outperformed the previous state-of-the-art model by a significant margin. In this paper, we propose a multimodal model enhanced with pre-trained audio and face encoders that captures subtle expressions to improve performance. Our model performs competitively with, and in some cases better than, state-of-the-art models on turn-taking metrics. All source code and pretrained models are available at https://github.com/sagatake/VAPwithAudioFaceEncoders.
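The abstract describes fusing the outputs of pre-trained audio and face encoders to predict voice-activity targets. As a hedged illustration only (this is not the authors' architecture; all names, dimensions, and the fusion-by-concatenation choice are assumptions), a minimal fuse-and-project step for one time frame might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen-encoder outputs for a single time frame
audio_emb = rng.standard_normal(256)  # e.g., from a pretrained speech encoder
face_emb = rng.standard_normal(128)   # e.g., from a pretrained face encoder

# Fuse by concatenation, then linearly project to illustrative VAP targets:
# per-speaker voice-activity probabilities over a few future time bins.
fused = np.concatenate([audio_emb, face_emb])        # shape (384,)
W = rng.standard_normal((2 * 4, fused.size)) * 0.01  # 2 speakers x 4 future bins
logits = W @ fused
probs = 1.0 / (1.0 + np.exp(-logits))                # sigmoid per bin

print(probs.shape)  # (8,)
```

A real VAP model would apply this over a sequence with a temporal backbone and train the projection end-to-end; the sketch only shows the fusion-and-prediction shape of the problem.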
Jun-5-2025