Reading the Videos: Temporal Labeling for Crowdsourced Time-Sync Videos Based on Semantic Embedding
Lv, Guangyi (University of Science and Technology of China) | Xu, Tong (University of Science and Technology of China) | Chen, Enhong (University of Science and Technology of China) | Liu, Qi (University of Science and Technology of China) | Zheng, Yi (Ant Financial Services Group)
Recent years have witnessed a boom in online media sharing, which raises significant challenges for effective management and retrieval. Although considerable effort has been made, precise retrieval of video shots on specific topics has been largely ignored. At the same time, thanks to the popularity of novel time-sync comments, so-called "bullet-screen comments", video semantics can now be combined with timestamps to support further research on temporal video labeling. In this paper, we propose a novel video understanding framework to assign temporal labels to highlighted video shots. To be specific, because bullet-screen comments are informally expressed, we first propose a temporal deep structured semantic model (T-DSSM) that represents comments as semantic vectors by taking advantage of their temporal correlation. Then, video highlights are recognized and labeled via these semantic vectors in a supervised way. Extensive experiments on a real-world dataset show that our framework can effectively label video highlights, outperforming baselines by a significant margin, which clearly validates the potential of our framework for video understanding as well as bullet-screen comment interpretation.
Apr-19-2016
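The two-step pipeline described in the abstract can be sketched in a toy form: embed each timestamped comment into a vector, pool the vectors of comments falling in the same time window, and assign each window a label with a supervised scorer. Everything below is a hypothetical illustration, not the paper's method: the hashed bag-of-words `embed` function is a stand-in for the T-DSSM encoder, and nearest-prototype cosine matching is a simple proxy for the paper's supervised highlight-labeling step; all names and parameters are assumptions.

```python
import math
from collections import defaultdict

DIM = 64  # toy embedding dimensionality (assumption)


def embed(text):
    """Stand-in for the T-DSSM comment encoder: a hashed
    bag-of-words vector, L2-normalized (hypothetical)."""
    v = [0.0] * DIM
    for tok in text.lower().split():
        v[hash(tok) % DIM] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]


def window_vectors(comments, window=10.0):
    """Average comment embeddings within fixed-length time windows,
    exploiting the temporal alignment of bullet-screen comments.
    `comments` is a list of (timestamp_seconds, text) pairs."""
    buckets = defaultdict(list)
    for ts, text in comments:
        buckets[int(ts // window)].append(embed(text))
    return {k: [sum(c) / len(vecs) for c in zip(*vecs)]
            for k, vecs in buckets.items()}


def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))


def label_windows(win_vecs, labeled_examples):
    """Supervised labeling proxy: assign each window the label of the
    most similar labeled prototype vector."""
    protos = {lab: embed(txt) for lab, txt in labeled_examples.items()}
    return {k: max(protos, key=lambda lab: cosine(v, protos[lab]))
            for k, v in win_vecs.items()}
```

For example, comments clustered around an action scene and a comedic scene would yield two windows, each matched to the prototype sharing the most vocabulary; in the real framework, the learned T-DSSM embedding replaces this lexical overlap with semantic similarity, which is what lets it handle informal comment expressions.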