Data-Centric Improvements for Enhancing Multi-Modal Understanding in Spoken Conversation Modeling
Maximillian Chen, Ruoxi Sun, and Sercan Ö. Arık
Conversational assistants are increasingly popular across diverse real-world applications, highlighting the need for advanced multimodal speech modeling. Speech, as a natural mode of communication, encodes rich user-specific characteristics such as speaking rate and pitch, making it critical for effective interaction. Our work introduces a data-centric customization approach for efficiently enhancing multimodal understanding in conversational speech modeling. Central to our contributions is a novel multi-task learning paradigm in which auxiliary tasks are designed to leverage a small amount of speech data. Our approach achieves state-of-the-art performance on the Spoken-SQuAD benchmark using only 10% of the training data and open-weight models, establishing a robust and efficient framework for audio-centric conversational modeling. We also introduce ASK-QA, the first dataset for multi-turn spoken dialogue with ambiguous user requests and dynamic evaluation inputs. Code and data are forthcoming.
arXiv.org Artificial Intelligence
Dec-20-2024
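
The multi-task learning paradigm mentioned in the abstract can be pictured as a weighted combination of a primary spoken-QA loss with auxiliary-task losses computed on a small amount of speech data. Below is a minimal PyTorch sketch under that assumption; the auxiliary task names (`asr`, `speaker_attr`), the `multitask_loss` helper, and the weights are hypothetical illustrations, since the abstract does not specify the auxiliary tasks or the weighting scheme.

```python
import torch

def multitask_loss(
    main_loss: torch.Tensor,
    aux_losses: dict[str, torch.Tensor],
    aux_weights: dict[str, float],
) -> torch.Tensor:
    """Weighted sum of losses: L = L_main + sum_i w_i * L_aux_i.

    Auxiliary tasks and weights here are illustrative assumptions,
    not the paper's actual design.
    """
    total = main_loss
    for name, loss in aux_losses.items():
        # Unlisted tasks default to weight 1.0.
        total = total + aux_weights.get(name, 1.0) * loss
    return total

# Example with dummy scalar losses.
main = torch.tensor(1.2)
aux = {"asr": torch.tensor(0.8), "speaker_attr": torch.tensor(0.5)}
weights = {"asr": 0.3, "speaker_attr": 0.1}
print(multitask_loss(main, aux, weights))  # tensor(1.4900)
```

A weighted-sum objective like this is the most common way to fold auxiliary supervision into a single training loop; in practice the weights would be tuned on held-out data.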