flowvqa
FlowVQA: Mapping Multimodal Logic in Visual Question Answering with Flowcharts
Shubhankar Singh, Purvi Chaurasia, Yerram Varun, Pranshu Pandya, Vatsal Gupta, Vivek Gupta, Dan Roth
Existing benchmarks for visual question answering lack visual grounding and complexity, particularly in evaluating spatial reasoning skills. We introduce FlowVQA, a novel benchmark aimed at assessing the capabilities of visual question-answering multimodal language models in reasoning with flowcharts as visual contexts. FlowVQA comprises 2,272 carefully generated and human-verified flowchart images from three distinct content sources, along with 22,413 diverse question-answer pairs, testing a spectrum of reasoning tasks that includes information localization, decision-making, and logical progression. We conduct a thorough baseline evaluation on a suite of both open-source and proprietary multimodal language models using various strategies, followed by an analysis of directional bias. The results underscore the benchmark's potential as a vital tool for advancing the field of multimodal modeling, providing a focused and challenging environment for enhancing model performance in visual and logical reasoning tasks.
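The abstract describes a baseline evaluation of multimodal models over flowchart-grounded question-answer pairs. Below is a minimal sketch of such an evaluation loop, under assumptions not taken from the paper: records are stored as a JSON list with `image_path`, `question`, and `answer` fields; `query_model` is a hypothetical placeholder for whatever multimodal model call is used; and normalized exact-match scoring stands in for the paper's actual metric.

```python
# Minimal sketch of a FlowVQA-style baseline evaluation loop.
# Assumptions (not from the paper): JSON records with "image_path",
# "question", and "answer" fields, and exact-match accuracy as the metric.

import json


def query_model(image_path: str, question: str) -> str:
    """Hypothetical placeholder for a multimodal model call.

    Replace with a real client (open-source or proprietary) that takes
    the flowchart image and the question and returns an answer string.
    """
    raise NotImplementedError


def evaluate(records_path: str) -> float:
    """Return normalized exact-match accuracy over all QA pairs."""
    with open(records_path) as f:
        records = json.load(f)

    correct = 0
    for rec in records:
        pred = query_model(rec["image_path"], rec["question"])
        # Lowercase/strip normalization before comparison; the paper
        # may use a different scoring scheme.
        if pred.strip().lower() == rec["answer"].strip().lower():
            correct += 1
    return correct / len(records)
```

A per-strategy comparison (e.g., with and without chain-of-thought prompting, as the abstract's "various strategies" suggests) would simply swap in different `query_model` implementations and compare the resulting accuracies.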