Kaviani, Parisa
Evaluating Automated Radiology Report Quality through Fine-Grained Phrasal Grounding of Clinical Findings
Mahmood, Razi, Yan, Pingkun, Reyes, Diego Machado, Wang, Ge, Kalra, Mannudeep K., Kaviani, Parisa, Wu, Joy T., Syeda-Mahmood, Tanveer
Several evaluation metrics have been developed recently to automatically assess the quality of generative AI reports for chest radiographs based only on textual information using lexical, semantic, or clinical named entity recognition methods. While some metrics cover clinical entities and their relations [9, 11], scoring metrics generally do not explicitly capture textual differences in the mention of anatomy, laterality, and severity. Further, phrasal grounding of the findings in terms of anatomical localization in images is not exploited in quality scoring. In this paper, we develop a new method of report quality evaluation that captures both fine-grained textual descriptions of findings and their phrasal grounding information in terms of anatomical locations in images. We first extract fine-grained finding patterns capturing the location, laterality, and severity of a large number of clinical findings. We then perform phrasal grounding to localize their associated anatomical regions on chest radiograph images. The textual and visual measures are then combined to rate the quality of the generated reports. We present results comparing this evaluation metric to other textual metrics on a gold standard dataset derived from the MIMIC collection of chest X-rays and validated reports, showing its robustness and sensitivity to factual errors.
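The combination of textual and visual measures described in the abstract could be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the finding attributes, the greedy matching, and the `alpha` weighting are all assumptions; the visual term uses plain intersection-over-union between grounded boxes as a stand-in for the paper's grounding measure.

```python
# Hypothetical sketch of a combined textual + visual report quality score.
# Each finding carries fine-grained attributes (location, laterality,
# severity) plus a grounded bounding box; attribute agreement and box
# overlap (IoU) are blended with weight `alpha` into a report-level score.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    location: str
    laterality: str
    severity: str
    box: tuple  # (x1, y1, x2, y2) grounded anatomical region

def pattern_match(a: Finding, b: Finding) -> float:
    """Fraction of fine-grained attributes that agree."""
    pairs = [(a.name, b.name), (a.location, b.location),
             (a.laterality, b.laterality), (a.severity, b.severity)]
    return sum(x == y for x, y in pairs) / len(pairs)

def iou(a, b) -> float:
    """Intersection-over-union of two grounded boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def report_score(generated, reference, alpha=0.5):
    """Greedily match each reference finding to its best generated
    finding, blending textual and visual agreement."""
    if not reference:
        return 1.0 if not generated else 0.0
    total = 0.0
    for ref in reference:
        total += max(
            (alpha * pattern_match(g, ref) + (1 - alpha) * iou(g.box, ref.box)
             for g in generated),
            default=0.0,
        )
    return total / len(reference)
```

A perfect report (identical findings with identical grounded boxes) scores 1.0, and a report that misses every reference finding scores 0.0; errors in laterality or severity, or a displaced grounded region, lower the score proportionally.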
Development and Validation of a Dynamic-Template-Constrained Large Language Model for Generating Fully-Structured Radiology Reports
Niu, Chuang, Kaviani, Parisa, Lyu, Qing, Kalra, Mannudeep K., Whitlow, Christopher T., Wang, Ge
Current LLMs for creating fully-structured reports face challenges of formatting errors, content hallucinations, and privacy leakage when uploading data to external servers. We aim to develop an open-source, accurate LLM for creating fully-structured, standardized lung cancer screening (LCS) reports from varying free-text reports across institutions, and to demonstrate its utility in automatic statistical analysis and individual lung nodule retrieval. With IRB approvals, our retrospective study included 5,442 de-identified low-dose CT (LDCT) LCS radiology reports from two institutions. We constructed two evaluation datasets by labeling 500 pairs of free-text and fully-structured radiology reports, and one large-scale consecutive dataset from January 2021 to December 2023. Two radiologists created a standardized template recording 27 lung nodule features for LCS. We designed a dynamic-template-constrained decoding method to enhance existing LLMs for creating fully-structured reports from free-text radiology reports. Using the consecutive structured reports, we automated descriptive statistical analyses and built a nodule retrieval prototype. Our best LLM for creating fully-structured reports achieved high performance on cross-institutional datasets, with an F1 score of about 97% and neither formatting errors nor content hallucinations. Our method consistently improved the best open-source LLMs by up to 10.42% and outperformed GPT-4o by 17.19%. The automatically derived statistical distributions were consistent with prior findings regarding attenuation, location, size, stability, and Lung-RADS. The retrieval system with structured reports allowed flexible nodule-level search and complex statistical analysis. Our software is publicly available for local deployment and further research.
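The core idea of template-constrained decoding, as described in this abstract, can be sketched in miniature. This is not the authors' implementation: the template fields, allowed vocabularies, and the keyword-spotting "decoder" below are illustrative assumptions. The point of the sketch is why formatting errors and hallucinated fields cannot occur: the output keys are fixed by the template, and each categorical field may only take a value from its allowed vocabulary.

```python
# Minimal sketch (assumed, not the paper's method) of template-constrained
# structured report filling. Field names and vocabularies are illustrative
# stand-ins for the 27-feature lung nodule template.
TEMPLATE = {
    "attenuation": {"solid", "part-solid", "ground-glass"},
    "laterality": {"left", "right"},
    "lobe": {"upper", "middle", "lower"},
}

def constrained_fill(free_text: str, propose):
    """`propose(field, allowed)` stands in for a constrained LLM decoding
    step; any value outside the template vocabulary is rejected, so the
    output always has exactly the template's keys and legal values."""
    report = {}
    for field, allowed in TEMPLATE.items():
        value = propose(field, allowed)
        report[field] = value if value in allowed else "not reported"
    return report

def keyword_propose(text):
    """Toy 'decoder': spot the first allowed value mentioned in the text."""
    def propose(field, allowed):
        return next((v for v in allowed if v in text.lower()), None)
    return propose
```

For example, `constrained_fill("Solid nodule in the right upper lobe.", keyword_propose("Solid nodule in the right upper lobe."))` yields a dictionary with exactly the template's three keys, and a report that never mentions a field maps it to `"not reported"` rather than an invented value.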