AV-Dialog: Spoken Dialogue Models with Audio-Visual Input
Tuochao Chen, Bandhav Veluri, Hongyu Gong, Shyamnath Gollakota
arXiv.org Artificial Intelligence
Dialogue models falter in noisy, multi-speaker environments, often producing irrelevant responses and awkward turn-taking. We present AV-Dialog, the first multimodal dialog framework that uses both audio and visual cues to track the target speaker, predict turn-taking, and generate coherent responses. By combining acoustic tokenization with multi-task, multi-stage training on monadic, synthetic, and real audio-visual dialogue datasets, AV-Dialog achieves robust streaming transcription, semantically grounded turn-boundary detection and accurate responses, resulting in a natural conversational flow. Experiments show that AV-Dialog outperforms audio-only models under interference, reducing transcription errors, improving turn-taking prediction, and enhancing human-rated dialogue quality. These results highlight the power of seeing as well as hearing for speaker-aware interaction, paving the way for spoken dialogue agents that perform robustly in real-world, noisy environments.
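To make the turn-taking idea in the abstract concrete, here is a minimal sketch of late fusion of per-frame audio and visual embeddings into a per-frame turn-end score. All function names, fusion weights, and dimensions are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def fuse_av_frames(audio_emb, visual_emb, w_a=0.6, w_v=0.4):
    """Late-fuse per-frame audio and visual embeddings (hypothetical weights)."""
    return w_a * audio_emb + w_v * visual_emb

def turn_end_scores(fused, weight, bias=0.0):
    """Per-frame logistic score that the current speaker's turn is ending."""
    logits = fused @ weight + bias
    return 1.0 / (1.0 + np.exp(-logits))

# Toy example: 5 frames of 8-dim embeddings (shapes chosen for illustration).
rng = np.random.default_rng(0)
audio = rng.standard_normal((5, 8))
visual = rng.standard_normal((5, 8))
fused = fuse_av_frames(audio, visual)
scores = turn_end_scores(fused, weight=rng.standard_normal(8))
print(scores.shape)  # (5,)
```

In a streaming system like the one described, such scores would be thresholded online to decide when the agent may take the floor; the visual stream helps disambiguate which speaker's turn is being tracked under acoustic interference.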
Nov-17-2025