Toward a Human-Level Video Understanding Intelligence
Heo, Yu-Jung, Lee, Minsu, Choi, Seongho, Choi, Woo Suk, Shin, Minjung, Jung, Minjoon, Ryu, Jeh-Kwang, Zhang, Byoung-Tak
We aim to develop an AI agent that can watch video clips and hold a conversation with humans about the video story. Developing video understanding intelligence is a significantly challenging task, and evaluation methods for adequately measuring and analyzing the progress of AI agents are also lacking. In this paper, we propose the Video Turing Test to provide an effective and practical assessment of video understanding intelligence as well as a human-likeness evaluation of AI agents. We define a general format and procedure for the Video Turing Test and present a case study to confirm the effectiveness and usefulness of the proposed test.
CogME: A Novel Evaluation Metric for Video Understanding Intelligence
Shin, Minjung, Kim, Jeonghoon, Choi, Seongho, Heo, Yu-Jung, Kim, Donghyun, Lee, Minsu, Zhang, Byoung-Tak, Ryu, Jeh-Kwang
Developing video understanding intelligence is quite challenging because it requires holistic integration of images, scripts, and sounds based on natural language processing, temporal dependency, and reasoning. Recently, substantial attempts have been made to build large-scale video datasets with associated question answering (QA) tasks. However, existing evaluation metrics for video question answering (VideoQA) do not provide meaningful analysis. To make progress, we argue that a well-made framework, grounded in the way humans understand, is required to explain and evaluate the performance of understanding in detail. We therefore propose a top-down evaluation system for VideoQA, based on the human cognitive process and story elements: Cognitive Modules for Evaluation (CogME). CogME is composed of three cognitive modules: targets, contents, and thinking. The interaction among the modules during understanding can be expressed in one sentence: "I understand the CONTENT of the TARGET through a way of THINKING." Each module has sub-components derived from the story elements, and by annotating these sub-components on individual questions we can specify the aspects of understanding each question requires. CogME thus provides a framework for an elaborated specification of VideoQA datasets. To examine the suitability of a VideoQA dataset for validating video understanding intelligence, we evaluated the baseline model of the DramaQA dataset by applying CogME. The evaluation reveals that story elements are unevenly reflected in the existing dataset, and that a model trained on it may make biased predictions. Although this study covers only a narrow range of stories, we expect it offers a first step toward incorporating the human cognitive process into the evaluation of video understanding intelligence in both humans and AI.