MTPChat: A Multimodal Time-Aware Persona Dataset for Conversational Agents
Wanqi Yang, Yanda Li, Meng Fang, Ling Chen
arXiv.org Artificial Intelligence
Understanding temporal dynamics is critical for conversational agents, enabling effective content analysis and informed decision-making. However, time-aware datasets, particularly for persona-grounded conversation, remain scarce, which limits both their scope and their complexity. To address this gap, we introduce MTPChat, a multimodal, time-aware persona dialogue dataset that integrates linguistic, visual, and temporal elements within dialogue and persona memory. Leveraging MTPChat, we propose two time-sensitive tasks: Temporal Next Response Prediction (TNRP) and Temporal Grounding Memory Prediction (TGMP), both designed to assess a model's ability to understand implicit temporal cues and dynamic interactions. Additionally, we present a framework featuring an adaptive temporal module that integrates multimodal streams and captures temporal dependencies. Experimental results validate the challenges posed by MTPChat and demonstrate the effectiveness of our framework in multimodal time-sensitive scenarios.
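To make the TNRP task concrete, the sketch below shows what a time-aware next-response-prediction sample could look like and how a naive temporal heuristic might select the relevant persona memory. All field names, values, and the `most_recent_memory` helper are illustrative assumptions, not the actual MTPChat schema or the paper's adaptive temporal module.

```python
from datetime import datetime

# Hypothetical TNRP-style sample: persona memories and dialogue turns
# carry timestamps, and response candidates differ in temporal consistency.
# (Field names are assumptions for illustration, not the MTPChat format.)
sample = {
    "persona_memory": [
        {"text": "I adopted a puppy.",
         "image": "puppy.jpg",
         "time": datetime(2023, 5, 1)},
        {"text": "My dog just turned one!",
         "image": "dog_birthday.jpg",
         "time": datetime(2024, 5, 1)},
    ],
    "dialogue": [
        {"speaker": "A",
         "text": "How is your dog doing these days?",
         "time": datetime(2024, 6, 1)},
    ],
    "candidates": [
        "She's still a tiny puppy.",                 # stale w.r.t. the timeline
        "She's a year old now and full of energy.",  # temporally consistent
    ],
}

def most_recent_memory(sample):
    """Pick the persona memory closest in time to the latest dialogue turn.

    A deliberately naive stand-in for temporal grounding: the paper's
    adaptive temporal module learns this alignment rather than using a
    fixed nearest-timestamp rule.
    """
    turn_time = sample["dialogue"][-1]["time"]
    return min(sample["persona_memory"],
               key=lambda m: abs((turn_time - m["time"]).days))

print(most_recent_memory(sample)["text"])  # → "My dog just turned one!"
```

A model that ignores timestamps might score both candidates equally; grounding the response in the temporally closest memory favors the second, temporally consistent candidate.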
Feb-9-2025