MedErr-CT: A Visual Question Answering Benchmark for Identifying and Correcting Errors in CT Reports

Sunggu Kyung, Hyungbin Park, Jinyoung Seo, Jimin Sung, Jihyun Kim, Dongyeong Kim, Wooyoung Jo, Yoojin Nam, Sangah Park, Taehee Kwon, Sang Min Lee, Namkug Kim

arXiv.org Artificial Intelligence 

Computed Tomography (CT) plays a crucial role in clinical diagnosis, but the growing demand for CT examinations has raised concerns about diagnostic errors. While Multimodal Large Language Models (MLLMs) demonstrate promising comprehension of medical knowledge, their tendency to produce inaccurate information highlights the need for rigorous validation. However, existing medical visual question answering (VQA) benchmarks primarily focus on simple visual recognition tasks, lacking clinical relevance and failing to assess expert-level knowledge. We introduce MedErr-CT, a novel benchmark for evaluating medical MLLMs' ability to identify and correct errors in CT reports through a VQA framework. The benchmark includes six error categories--four vision-centric errors (Omission, Insertion, Direction, Size) and two lexical error types (Unit, Typo)--and is organized into three task levels: classification, detection, and correction. Using this benchmark, we quantitatively assess the performance of state-of-the-art 3D medical MLLMs, revealing substantial variation in their capabilities across different error types. Our benchmark contributes to the development of more reliable and clinically applicable MLLMs, ultimately helping to reduce diagnostic errors and improve accuracy in clinical practice.
