Leveraging Video Descriptions to Learn Video Question Answering

Zeng, Kuo-Hao (Stanford University and National Tsing Hua University) | Chen, Tseng-Hung (National Tsing Hua University) | Chuang, Ching-Yao (National Tsing Hua University) | Liao, Yuan-Hong (National Tsing Hua University) | Niebles, Juan Carlos (Stanford University) | Sun, Min (National Tsing Hua University)

AAAI Conferences 

We propose a scalable approach to learn video-based question answering (QA): answering a free-form natural language question about the contents of a video. Our approach automatically harvests a large number of videos and descriptions freely available online. Then, a large number of candidate QA pairs are automatically generated from the descriptions rather than manually annotated. Next, we use these candidate QA pairs to train a number of video-based QA methods extended from MN (Sukhbaatar et al. 2015), VQA (Antol et al. 2015), SA (Yao et al. 2015), and SS (Venugopalan et al. 2015). To handle imperfect candidate QA pairs, we propose a self-paced learning procedure that iteratively identifies noisy pairs and mitigates their effect during training. Finally, we evaluate performance on manually generated video-based QA pairs. The results show that our self-paced learning procedure is effective, and the extended SS model outperforms various baselines.
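The sketch below illustrates the general self-paced learning idea referenced in the abstract: train on the automatically generated candidate QA pairs, but in each round keep only the pairs the current model finds easy (low loss), then gradually admit harder pairs. This is a minimal, hedged illustration; the model, the synthetic data, the fraction-based selection schedule, and all hyper-parameters are placeholders and not the authors' actual procedure or architecture.

```python
# Minimal sketch of self-paced training on noisy, machine-generated QA pairs.
# Everything here (model, data, schedule) is illustrative, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder "candidate QA pairs": random features standing in for fused
# video + question embeddings, with answers corrupted for 20% of pairs.
features = torch.randn(512, 64)
answers = (features[:, 0] > 0).long()
noise = torch.rand(512) < 0.2
answers[noise] = 1 - answers[noise]

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(reduction="none")

keep_frac = 0.5  # start by training on the easiest half of the candidate pairs
for round_idx in range(5):
    # 1) Score every candidate pair with the current model.
    with torch.no_grad():
        per_pair_loss = loss_fn(model(features), answers)
    # 2) Select the currently "easy" pairs; noisy auto-generated pairs tend to
    #    remain hard and are therefore excluded from early training rounds.
    k = int(keep_frac * len(features))
    easy_idx = per_pair_loss.topk(k, largest=False).indices
    # 3) Train a few steps on the selected subset only.
    for _ in range(50):
        opt.zero_grad()
        loss = loss_fn(model(features[easy_idx]), answers[easy_idx]).mean()
        loss.backward()
        opt.step()
    print(f"round {round_idx}: kept {k}/512 pairs, loss {loss.item():.3f}")
    # 4) Relax the curriculum so harder pairs enter in later rounds.
    keep_frac = min(1.0, keep_frac + 0.1)
```

A threshold on the per-pair loss (rather than a fixed fraction) is another common way to realize the same curriculum; the fraction-based variant is used here only because it is robust to the loss scale of an untrained model.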
