

31fb284a0aaaad837d2930a610cd5e50-Supplemental-Conference.pdf

Neural Information Processing Systems

In our work, we study video-language pretraining in a specific yet significant domain, the 1st-person view, motivated by the release of the Ego4D dataset. The varying clip frequencies mainly depend on the manual narrations, which are annotated according to the video scenarios and activities. There are on average 13.4 clips per minute of video, with a maximum of 175.8. Fig. 6(b) displays the distribution of clip durations. In Figure 1(c), we present the distribution of narration lengths in words.
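The statistics quoted above can be reproduced from timestamped narrations. Below is a minimal sketch, not the authors' code: the `narrations` and `video_durations` structures are hypothetical stand-ins for the EgoClip annotation format, used only to show how clips-per-minute and clip-duration figures would be derived.

```python
from collections import defaultdict
from statistics import mean

def clip_statistics(narrations, video_durations):
    """Compute clips-per-minute and clip-duration statistics.

    narrations: dict mapping video_id -> list of (start_sec, end_sec) clip intervals
    video_durations: dict mapping video_id -> total video length in seconds
    (Both formats are illustrative assumptions, not the released annotation schema.)
    """
    clips_per_minute = []
    clip_durations = []
    for vid, clips in narrations.items():
        minutes = video_durations[vid] / 60.0
        if minutes > 0:
            clips_per_minute.append(len(clips) / minutes)
        clip_durations.extend(end - start for start, end in clips)
    return {
        "avg_clips_per_min": mean(clips_per_minute),
        "max_clips_per_min": max(clips_per_minute),
        "avg_clip_duration_sec": mean(clip_durations),
    }

# Toy usage: one 2-minute video containing three narrated clips.
stats = clip_statistics(
    narrations={"vid_0": [(0.0, 1.2), (30.5, 32.0), (60.0, 61.0)]},
    video_durations={"vid_0": 120.0},
)
print(stats)
```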


Egocentric Video-Language Pretraining

Neural Information Processing Systems

As illustrated in Tab. 1, the formerly largest egocentric video dataset, EPIC-KITCHENS-100 [14], focuses on kitchen scenarios, and its size is far smaller than those of the 3rd-person pretraining sets WebVid-2M [3] and HowTo100M [10].




Egocentric Video-Language Pretraining

Lin, Kevin Qinghong, Wang, Alex Jinpeng, Soldan, Mattia, Wray, Michael, Yan, Rui, Xu, Eric Zhongcong, Gao, Difei, Tu, Rongcheng, Zhao, Wenzhe, Kong, Weijie, Cai, Chengfei, Wang, Hongfa, Damen, Dima, Ghanem, Bernard, Liu, Wei, Shou, Mike Zheng

arXiv.org Artificial Intelligence

Video-Language Pretraining (VLP), which aims to learn transferable representation to advance a wide range of video-text downstream tasks, has recently received increasing attention. Best performing works rely on large-scale, 3rd-person video-text datasets, such as HowTo100M. In this work, we exploit the recently released Ego4D dataset to pioneer Egocentric VLP along three directions. (i) We create EgoClip, a 1st-person video-text pretraining dataset comprising 3.8M clip-text pairs well-chosen from Ego4D, covering a large variety of human daily activities. (ii) We propose a novel pretraining objective, dubbed EgoNCE, which adapts video-text contrastive learning to the egocentric domain by mining egocentric-aware positive and negative samples. (iii) We introduce EgoMCQ, a development benchmark that is close to EgoClip and hence can support effective validation and fast exploration of our design decisions in EgoClip and EgoNCE. Furthermore, we demonstrate strong performance on five egocentric downstream tasks across three datasets: video-text retrieval on EPIC-KITCHENS-100; action recognition on Charades-Ego; natural language query, moment query, and object state change classification on Ego4D challenge benchmarks. The dataset and code are available at https://github.com/showlab/EgoVLP.
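For readers unfamiliar with the contrastive setup that EgoNCE adapts, the sketch below shows the standard symmetric video-text InfoNCE loss over a batch of paired clip and text embeddings. It is an illustrative assumption, not the authors' implementation: the egocentric-aware positive and negative mining that distinguishes EgoNCE is described in the paper and is not reproduced here, and all names and shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def video_text_infonce(video_emb, text_emb, temperature=0.05):
    """Symmetric contrastive loss over paired clip/text embeddings.

    video_emb, text_emb: tensors of shape (batch, dim); row i of each forms a pair.
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                   # (batch, batch) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_v2t = F.cross_entropy(logits, targets)      # video -> text direction
    loss_t2v = F.cross_entropy(logits.T, targets)    # text -> video direction
    return 0.5 * (loss_v2t + loss_t2v)

# Toy usage: a batch of 8 paired 256-dimensional embeddings.
loss = video_text_infonce(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```

EgoNCE keeps this symmetric formulation but, per the abstract, mines additional positives and negatives that reflect the egocentric domain (e.g., grouping semantically similar narrations and hard negatives from the same video), rather than relying only on in-batch diagonal pairs.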