VSTAR: A Video-grounded Dialogue Dataset for Situated Semantic Understanding with Scene and Topic Transitions
Yuxuan Wang, Zilong Zheng, Xueliang Zhao, Jinpeng Li, Yueqian Wang, Dongyan Zhao
–arXiv.org Artificial Intelligence
Video-grounded dialogue understanding is a challenging problem that requires machines to perceive, parse, and reason over situated semantics extracted from weakly aligned video and dialogues. Most existing benchmarks treat both modalities as a frame-independent visual understanding task, neglecting intrinsic attributes of multimodal dialogues such as scene and topic transitions. In this paper, we present the Video-grounded Scene&Topic AwaRe dialogue (VSTAR) dataset, a large-scale video-grounded dialogue understanding dataset based on 395 TV series. Based on VSTAR, we propose two benchmarks for video-grounded dialogue understanding: scene segmentation and topic segmentation, and one benchmark for video-grounded dialogue generation. Comprehensive experiments on these benchmarks demonstrate the importance of multimodal information and segments in video-grounded dialogue understanding and generation.
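The scene and topic segmentation benchmarks described above can be framed as boundary prediction over a sequence of dialogue turns: each turn is labeled as starting a new scene (or topic) or not, and predictions are scored against gold boundaries. The following is a minimal, hypothetical sketch of that framing with a simple boundary-level F1 metric; it is an illustration of the task setup, not the paper's evaluation code.

```python
# Hypothetical illustration: scene/topic segmentation as binary boundary
# prediction over dialogue turns (not VSTAR's actual evaluation script).

def boundary_f1(predicted, gold):
    """Precision, recall, and F1 over predicted boundary positions."""
    pred, ref = set(predicted), set(gold)
    tp = len(pred & ref)                      # correctly predicted boundaries
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy labels: 1 marks a turn where a new scene/topic begins.
gold_labels = [0, 0, 1, 0, 0, 0, 1, 0]
pred_labels = [0, 1, 1, 0, 0, 0, 1, 0]

gold_idx = [i for i, b in enumerate(gold_labels) if b]   # [2, 6]
pred_idx = [i for i, b in enumerate(pred_labels) if b]   # [1, 2, 6]

p, r, f = boundary_f1(pred_idx, gold_idx)
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")
```

Exact-match boundary F1 is one common choice; segmentation work also reports window-based metrics such as Pk or WindowDiff, which give partial credit for near-miss boundaries.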
May-30-2023
- Genre:
- Research Report (0.82)
- Industry:
- Leisure & Entertainment > Sports (0.46)
- Media > Television (0.51)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Natural Language (1.00)
- Vision (0.69)