Evaluating Visual Reasoning through Grounded Language Understanding

Suhr, Alane (Cornell University) | Lewis, Mike (Facebook) | Yeh, James (Cornell University) | Artzi, Yoav

AI Magazine 

Autonomous systems that understand natural language must reason about complex language and visual observations. Key to making progress toward such systems is the availability of benchmark datasets and tasks. We introduce the Cornell Natural Language Visual Reasoning (NLVR) corpus, which targets reasoning skills such as counting, comparison, and set reasoning. NLVR contains 92,244 examples of natural language statements paired with synthetic images, each annotated with a boolean value; the task is simply to determine whether the statement is true or false of the image. Although the task is simple, NLVR was designed to challenge systems with diverse linguistic phenomena and complex reasoning. Linguistic analysis confirms that NLVR exhibits diversity and complexity beyond what contemporary benchmarks provide, and empirical evaluation of several methods demonstrates the open challenges it presents.
