VideoOFA: Two-Stage Pre-Training for Video-to-Text Generation
Xilun Chen, Lili Yu, Wenhan Xiong, Barlas Oğuz, Yashar Mehdad, Wen-tau Yih
arXiv.org Artificial Intelligence
We propose a new two-stage pre-training framework for video-to-text generation tasks such as video captioning and video question answering. A generative encoder-decoder model is first jointly pre-trained on massive image-text data to learn fundamental vision-language concepts, and then adapted to video data in an intermediate video-text pre-training stage to learn video-specific skills such as spatio-temporal reasoning. As a result, our VideoOFA model achieves new state-of-the-art performance on four video captioning benchmarks, beating prior art by an average of 9.7 CIDEr points. It also outperforms existing models on two open-ended video question answering datasets, demonstrating its generalization ability as a universal video-to-text model.
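The abstract describes the recipe at a high level only; the sketch below illustrates the two-stage idea with a toy PyTorch encoder-decoder. Everything here (module sizes, the `pretrain` loop, `fake_batches`, learning rates, and treating video frames as a longer flattened token sequence) is a hypothetical stand-in, not the paper's actual architecture or hyperparameters.

```python
# Toy sketch of two-stage pre-training: stage 1 on image-text pairs,
# stage 2 on video-text pairs, reusing the same generative model.
import torch
import torch.nn as nn

class ToyEncoderDecoder(nn.Module):
    """Tiny generative encoder-decoder standing in for the real model."""
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, src_tokens, tgt_tokens):
        memory = self.encoder(self.embed(src_tokens))
        hidden = self.decoder(self.embed(tgt_tokens), memory)
        return self.lm_head(hidden)

def pretrain(model, batches, lr):
    """Generic caption-generation loop (teacher-forced cross-entropy)."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for src, tgt in batches:
        logits = model(src, tgt[:, :-1])  # predict next target token
        loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                       tgt[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

def fake_batches(src_len, n=3):
    """Random (source tokens, caption tokens) pairs as placeholder data."""
    for _ in range(n):
        yield (torch.randint(0, 1000, (2, src_len)),
               torch.randint(0, 1000, (2, 12)))

model = ToyEncoderDecoder()
# Stage 1: joint image-text pre-training (shorter visual token sequence).
pretrain(model, fake_batches(src_len=16), lr=1e-4)
# Stage 2: intermediate video-text pre-training; frames are assumed to be
# flattened into a longer token sequence, typically with a lower LR.
pretrain(model, fake_batches(src_len=64), lr=1e-5)
```

The key design point the sketch mirrors is that both stages train the same generative model end to end, so the video stage inherits the vision-language grounding learned from image-text data rather than starting from scratch.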
May 4, 2023