Mahmood, Razi
Evaluating Automated Radiology Report Quality through Fine-Grained Phrasal Grounding of Clinical Findings
Mahmood, Razi; Yan, Pingkun; Reyes, Diego Machado; Wang, Ge; Kalra, Mannudeep K.; Kaviani, Parisa; Wu, Joy T.; Syeda-Mahmood, Tanveer
Several evaluation metrics have been developed recently to automatically assess the quality of generative AI reports for chest radiographs based only on textual information using lexical, semantic, or clinical named entity recognition methods. While some metrics cover clinical entities and their relations [9, 11], scoring metrics generally do not explicitly capture textual mention differences in anatomy, laterality, and severity. Further, phrasal grounding of the findings in terms of anatomical localization in images is not exploited in the quality scoring. In this paper, we develop a new method of report quality evaluation that captures both fine-grained textual descriptions of findings and their phrasal grounding in terms of anatomical locations in images. We first extract fine-grained finding patterns capturing the location, laterality, and severity of a large number of clinical findings, and then perform phrasal grounding to localize their associated anatomical regions on chest radiograph images. The textual and visual measures are then combined to rate the quality of the generated reports. We present results that compare this evaluation metric to other textual metrics on a gold standard dataset derived from the MIMIC collection of chest X-rays and validated reports, showing its robustness and sensitivity to factual errors.
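A minimal sketch of how such a combined textual and visual score could be computed is given below, assuming each finding is reduced to fine-grained attributes plus a grounded bounding box. The Finding fields, the greedy matching, and the equal weighting of attribute agreement and grounding overlap are illustrative assumptions for this example, not the scoring formulation from the paper.

```python
# Illustrative sketch: mix fine-grained textual agreement with phrasal-grounding
# overlap into one report quality score. Matching and weighting rules are assumptions.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image coordinates

@dataclass
class Finding:
    label: str           # e.g. "opacity"
    location: str        # e.g. "lower lobe"
    laterality: str      # e.g. "left"
    severity: str        # e.g. "mild"
    box: Optional[Box]   # grounded anatomical region, if localized

def attribute_score(pred: Finding, ref: Finding) -> float:
    """Fraction of fine-grained attributes that agree between a generated and a reference finding."""
    attrs = ("label", "location", "laterality", "severity")
    return sum(getattr(pred, a) == getattr(ref, a) for a in attrs) / len(attrs)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two grounded regions."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def report_quality(pred: List[Finding], ref: List[Finding], w_text: float = 0.5) -> float:
    """Average per-reference-finding score combining textual agreement and grounding overlap."""
    if not ref:
        return 1.0 if not pred else 0.0
    scores = []
    for r in ref:
        # Greedily pick the generated finding that best matches this reference finding.
        candidates = [p for p in pred if p.label == r.label] or pred
        best = max(candidates, key=lambda p: attribute_score(p, r), default=None)
        if best is None:
            scores.append(0.0)
            continue
        text = attribute_score(best, r)
        visual = iou(best.box, r.box) if best.box and r.box else 0.0
        scores.append(w_text * text + (1.0 - w_text) * visual)
    return sum(scores) / len(scores)
```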
Fact-Checking of AI-Generated Reports
Mahmood, Razi; Wang, Ge; Kalra, Mannudeep; Yan, Pingkun
With advances in generative artificial intelligence (AI), it is now possible to produce realistic-looking automated reports for preliminary reads of radiology images. This can expedite clinical workflows, improve accuracy, and reduce overall costs. However, it is also well known that such models often hallucinate, leading to false findings in the generated reports. In this paper, we propose a new method of fact-checking AI-generated reports using their associated images. Specifically, the developed examiner differentiates real and fake sentences in reports by learning the association between an image and sentences describing real or potentially fake findings. To train such an examiner, we first created a new dataset of fake reports by perturbing the findings in the original ground truth radiology reports associated with images. Text encodings of real and fake sentences drawn from these reports are then paired with image encodings to learn the mapping to real/fake labels. The utility of such an examiner is demonstrated for verifying automatically generated reports by detecting and removing fake sentences. Future generative AI approaches can use the resulting tool to validate their reports, leading to a more responsible use of AI in expediting clinical workflows.
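Below is a minimal PyTorch-style sketch of the examiner idea described above, assuming precomputed image and sentence embeddings fused by concatenation and scored with a small classifier. The encoder choices, fusion strategy, layer sizes, and the filter_report helper are illustrative assumptions rather than the authors' actual architecture.

```python
# Hypothetical sketch: score each report sentence as real or fake given the paired image.
import torch
import torch.nn as nn

class SentenceExaminer(nn.Module):
    def __init__(self, image_dim: int = 512, text_dim: int = 512, hidden: int = 256):
        super().__init__()
        # Assumes precomputed embeddings, e.g. from an image encoder and a text encoder.
        self.classifier = nn.Sequential(
            nn.Linear(image_dim + text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # logit: real (1) vs. fake (0) sentence
        )

    def forward(self, image_emb: torch.Tensor, sentence_emb: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([image_emb, sentence_emb], dim=-1)
        return self.classifier(fused).squeeze(-1)

def filter_report(model: SentenceExaminer, image_emb: torch.Tensor,
                  sentence_embs: torch.Tensor, sentences: list, threshold: float = 0.5) -> list:
    """Keep only the sentences the examiner scores as real for the given image.

    image_emb: (image_dim,) embedding of the chest X-ray.
    sentence_embs: (num_sentences, text_dim) embeddings of the report sentences.
    """
    with torch.no_grad():
        logits = model(image_emb.expand(len(sentences), -1), sentence_embs)
        keep = torch.sigmoid(logits) >= threshold
    return [s for s, k in zip(sentences, keep.tolist()) if k]
```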