Gupta, Vikash
Fair Evaluation of Federated Learning Algorithms for Automated Breast Density Classification: The Results of the 2022 ACR-NCI-NVIDIA Federated Learning Challenge
Schmidt, Kendall, Bearce, Benjamin, Chang, Ken, Coombs, Laura, Farahani, Keyvan, Elbatele, Marawan, Mouhebe, Kaouther, Marti, Robert, Zhang, Ruipeng, Zhang, Yao, Wang, Yanfeng, Hu, Yaojun, Ying, Haochao, Xu, Yuyang, Testagrose, Conrad, Demirer, Mutlu, Gupta, Vikash, Akünal, Ünal, Bujotzek, Markus, Maier-Hein, Klaus H., Qin, Yi, Li, Xiaomeng, Kalpathy-Cramer, Jayashree, Roth, Holger R.
The correct interpretation of breast density is important in the assessment of breast cancer risk. AI has been shown capable of accurately predicting breast density; however, because imaging characteristics differ across mammography systems, models built using data from one system do not generalize well to others. Although federated learning (FL) has emerged as a way to improve the generalizability of AI without the need to share data, the best way to preserve features from all training data during FL remains an active area of research. To explore FL methodology, the breast density classification FL challenge was hosted in partnership with the American College of Radiology, Harvard Medical School's Mass General Brigham, the University of Colorado, NVIDIA, and the National Institutes of Health National Cancer Institute. Challenge participants submitted Docker containers capable of implementing FL across three simulated medical facilities, each containing a unique large mammography dataset. The breast density FL challenge ran from June 15 to September 5, 2022, attracting seven finalists from around the world. The winning FL submission reached a linear kappa score of 0.653 on the challenge test data and 0.413 on an external testing dataset, performing comparably to a model trained on the same data in a central location.
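The linear kappa reported above is the linearly weighted Cohen's kappa over the ordinal BI-RADS density categories, which penalizes a prediction in proportion to its distance from the true category. A minimal sketch of the metric using scikit-learn and hypothetical labels (not challenge code):

```python
# Minimal sketch of linearly weighted Cohen's kappa; labels are made up.
from sklearn.metrics import cohen_kappa_score

# BI-RADS breast-density categories encoded as ordinal labels 0-3
# (A: almost entirely fatty ... D: extremely dense).
y_true = [0, 1, 2, 3, 2, 1, 0, 3]
y_pred = [0, 1, 2, 2, 2, 1, 1, 3]

# weights="linear" penalizes a prediction in proportion to how many
# density categories it lies away from the ground truth.
kappa = cohen_kappa_score(y_true, y_pred, weights="linear")
print(f"linear kappa: {kappa:.3f}")
```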
RAISE -- Radiology AI Safety, an End-to-end lifecycle approach
Cardoso, M. Jorge, Moosbauer, Julia, Cook, Tessa S., Erdal, B. Selnur, Genereaux, Brad, Gupta, Vikash, Landman, Bennett A., Lee, Tiarna, Nachev, Parashkev, Somasundaram, Elanchezhian, Summers, Ronald M., Younis, Khaled, Ourselin, Sebastien, Pfister, Franz MJ
The integration of AI into radiology introduces opportunities for improved clinical care provision and efficiency, but, as with any new technology, it demands a meticulous approach to mitigating potential risks. Beginning with rigorous pre-deployment evaluation and validation, the focus should be on ensuring models meet the highest standards of safety, effectiveness, and efficacy for their intended applications. Input and output guardrails implemented during production usage act as an additional layer of protection, identifying and addressing individual failures as they occur. Continuous post-deployment monitoring allows for tracking population-level performance (data drift), fairness, and value delivery over time. Scheduling reviews of post-deployment model performance and educating radiologists about new algorithm-driven findings are critical for AI to be effective in clinical practice. Recognizing that no single AI solution can provide absolute assurance, even when limited to its intended use, the synergistic application of quality assurance at multiple levels - regulatory, clinical, technical, and ethical - is emphasized. Collaborative efforts between stakeholders spanning healthcare systems, industry, academia, and government are imperative to address the multifaceted challenges involved. Trust in AI is an earned privilege, contingent on a broad set of goals, among them transparently demonstrating that the AI adheres to the same rigorous safety, effectiveness, and efficacy standards as other established medical technologies. By doing so, developers can instill confidence among providers and patients alike, enabling the responsible scaling of AI and the realization of its potential benefits. The roadmap presented herein aims to expedite the achievement of deployable, reliable, and safe AI in radiology.
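As one concrete example of the post-deployment monitoring layer, a drift check can compare a production score or feature distribution against a reference window. A minimal sketch using a two-sample Kolmogorov-Smirnov test (the data and alert threshold are illustrative, not part of RAISE):

```python
# Sketch of one drift check: compare a model-confidence distribution
# from production against a validation-time reference window.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.80, scale=0.05, size=1000)   # validation-time scores
production = rng.normal(loc=0.72, scale=0.07, size=1000)  # recent deployment scores

stat, p_value = ks_2samp(reference, production)
if p_value < 0.01:  # the alert threshold is a policy choice, not a standard
    print(f"possible data drift: KS={stat:.3f}, p={p_value:.2g}")
```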
Integration and Implementation Strategies for AI Algorithm Deployment with Smart Routing Rules and Workflow Management
Erdal, Barbaros Selnur, Gupta, Vikash, Demirer, Mutlu, Fair, Kim H., White, Richard D., Blair, Jeff, Deichert, Barbara, Lafleur, Laurie, Qin, Ming Melvin, Bericat, David, Genereaux, Brad
This paper reviews the challenges hindering the widespread adoption of artificial intelligence (AI) solutions in the healthcare industry, focusing on computer vision applications for medical imaging, and how interoperability and enterprise-grade scalability can be used to address these challenges. The complex nature of healthcare workflows, intricacies in managing large and secure medical imaging data, and the absence of standardized frameworks for AI development pose significant barriers and require a new paradigm to address them. The role of interoperability is examined in this paper as a crucial factor in connecting disparate applications within healthcare workflows. Standards such as DICOM, Health Level 7 (HL7), and Integrating the Healthcare Enterprise (IHE) are highlighted as foundational for common imaging workflows. A specific focus is placed on the role of DICOM gateways, with Smart Routing Rules and Workflow Management leading transformational efforts in this area. To drive enterprise scalability, new tools are needed. Project MONAI, established in 2019, is introduced as an initiative aiming to redefine the development of medical AI applications. The MONAI Deploy App SDK, a component of Project MONAI, is identified as a key tool in simplifying the packaging and deployment process, enabling repeatable, scalable, and standardized deployment patterns for AI applications. This paper underscores the potential impact of successful AI adoption in healthcare, offering physicians both life-saving and time-saving insights and driving efficiencies in radiology department workflows. The collaborative efforts between academia and industry are emphasized as essential for advancing the adoption of healthcare AI solutions.
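Smart Routing Rules of the kind discussed here ultimately come down to matching on DICOM header attributes and dispatching studies to the appropriate application. An illustrative sketch with pydicom (the rules and destination names are hypothetical; production gateways express this declaratively rather than in ad-hoc code):

```python
# Illustrative routing rule: inspect standard DICOM headers and decide
# which AI pipeline (if any) should receive the study.
import pydicom

def route_study(path: str) -> str | None:
    # read headers only; pixel data is not needed for routing
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    modality = ds.get("Modality", "")
    body_part = str(ds.get("BodyPartExamined", "")).upper()

    if modality == "MG":
        return "breast-density-app"   # mammography -> density classifier
    if modality == "CR" and body_part == "CHEST":
        return "chest-xray-app"       # chest radiograph -> CXR models
    return None                       # no matching rule; default workflow
```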
Current State of Community-Driven Radiological AI Deployment in Medical Imaging
Gupta, Vikash, Erdal, Barbaros Selnur, Ramirez, Carolina, Floca, Ralf, Jackson, Laurence, Genereaux, Brad, Bryson, Sidney, Bridge, Christopher P, Kleesiek, Jens, Nensa, Felix, Braren, Rickmer, Younis, Khaled, Penzkofer, Tobias, Bucher, Andreas Michael, Qin, Ming Melvin, Bae, Gigon, Lee, Hyeonhoon, Cardoso, M. Jorge, Ourselin, Sebastien, Kerfoot, Eric, Choudhury, Rahul, White, Richard D., Cook, Tessa, Bericat, David, Lungren, Matthew, Haukioja, Risto, Shuaib, Haris
Artificial Intelligence (AI) has become commonplace in solving routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to widen, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community that is building standards for AI deployment in healthcare institutions and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes that take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical radiology workflow. We also present a taxonomy of radiology AI use-cases. Through this report, we intend to educate stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
Cascading Neural Network Methodology for Artificial Intelligence-Assisted Radiographic Detection and Classification of Lead-Less Implanted Electronic Devices within the Chest
Demirer, Mutlu, White, Richard D., Gupta, Vikash, Sebro, Ronnie A., Erdal, Barbaros S.
Background & Purpose: Chest X-ray (CXR) use in pre-MRI safety screening for Lead-Less Implanted Electronic Devices (LLIEDs), which are easily overlooked or misidentified on a frontal view (often the only view acquired), is common. Although most LLIED types are "MRI conditional": 1. some are stringently conditional; 2. different conditional types have specific patient- or device-management requirements; and 3. particular types are "MRI unsafe". This work focused on developing CXR interpretation-assisting Artificial Intelligence (AI) methodology with: 1. 100% detection of LLIED presence/location; and 2. high-performance classification of LLIED type. Materials & Methods: Data mining (03/1993-02/2021) produced an AI Model Development Population (1,100 patients/4,871 images) yielding 4,924 LLIED Regions-Of-Interest (ROIs) (with image-quality grading) used in training, validation, and testing. For developing the cascading neural network (detection via Faster R-CNN and classification via Inception V3), "ground-truth" CXR annotation (ROI labeling per LLIED), as well as inference display (as Generated Bounding Boxes (GBBs)), relied on a GPU-based graphical user interface. Results: To achieve 100% LLIED detection, Model 1 required reducing the probability threshold to 0.00002, resulting in more GBBs per LLIED-related ROI. Targeting LLIED-type classification following detection of all LLIEDs, Model 2 performed multi-class classification to reach high performance while reducing falsely positive GBBs. Despite suboptimal image quality in 24% of ROIs, classification was correct in 98.9% of cases, and AUCs for the 9 LLIED types were 1.00 for 8 and 0.92 for 1. Among all misclassification cases: 1. none involved stringently conditional or unsafe LLIEDs; and 2. most were attributable to suboptimal images. Conclusion: This project successfully developed an LLIED-related AI methodology supporting: 1. 100% detection; and 2. typically 100% type classification.
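The cascade pairs a detector tuned for maximal sensitivity with a classifier that types each proposed region. A minimal sketch of that pattern using stock torchvision backbones (generic pretrained stand-ins for the paper's trained Faster R-CNN and Inception V3; normalization and training are omitted):

```python
# Detect-then-classify cascade: the detector proposes device ROIs,
# a second network classifies each cropped region.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models import inception_v3

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = inception_v3(weights="DEFAULT").eval()  # paper: 9 LLIED types

def cascade(image: torch.Tensor, threshold: float = 0.00002):
    """image: 3xHxW float tensor in [0, 1] (grayscale CXR replicated to 3 channels)."""
    results = []
    with torch.no_grad():
        det = detector([image])[0]
        for box, score in zip(det["boxes"], det["scores"]):
            # a deliberately tiny threshold, as in the paper, trades extra
            # candidate boxes for 100% detection sensitivity
            if score < threshold:
                continue
            x0, y0, x1, y1 = box.int().tolist()
            crop = image[:, y0:y1, x0:x1].unsqueeze(0)
            crop = torch.nn.functional.interpolate(
                crop, size=(299, 299), mode="bilinear")  # Inception input size
            logits = classifier(crop)
            results.append((box.tolist(), logits.softmax(-1).argmax().item()))
    return results
```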
Deep Learning-Based Automatic Detection of Poorly Positioned Mammograms to Minimize Patient Return Visits for Repeat Imaging: A Real-World Application
Gupta, Vikash, Taylor, Clayton, Bonnet, Sarah, Prevedello, Luciano M., Hawley, Jeffrey, White, Richard D, Flores, Mona G, Erdal, Barbaros Selnur
Screening mammograms are a routine imaging exam performed to detect breast cancer in its early stages, reducing the morbidity and mortality attributed to this disease. To maximize the efficacy of breast cancer screening programs, proper mammographic positioning is paramount. Proper positioning ensures adequate visualization of breast tissue and is necessary for effective breast cancer detection. Therefore, breast-imaging radiologists must assess each mammogram for adequacy of positioning before providing a final interpretation of the examination; inadequate positioning often necessitates return patient visits for additional imaging. In this paper, we propose a deep learning-based method that mimics and automates this decision-making process to identify poorly positioned mammograms. Our objective for this algorithm is to assist mammography technologists in recognizing inadequately positioned mammograms in real time, improve the quality of mammographic positioning and performance, and ultimately reduce repeat visits for patients with initially inadequate imaging. The proposed model showed a true positive rate for detecting correct positioning of 91.35% in the mediolateral oblique (MLO) view and 95.11% in the craniocaudal (CC) view. In addition to these results, we also present an automatically generated report that can aid the mammography technologist in taking corrective measures during the patient visit.
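For reference, the quoted per-view figures are true positive rates for the "adequately positioned" class, i.e., TP / (TP + FN) computed separately on MLO and CC views. A small sketch with hypothetical labels:

```python
# Per-view true positive rate; labels are made up for illustration.
import numpy as np

def true_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """TPR = TP / (TP + FN) for the positive (adequately positioned) class."""
    positives = y_true == 1
    return float((y_pred[positives] == 1).mean())

# hypothetical per-view predictions (1 = adequate positioning)
mlo_true, mlo_pred = np.array([1, 1, 0, 1, 1]), np.array([1, 1, 0, 1, 0])
cc_true, cc_pred = np.array([1, 0, 1, 1, 1]), np.array([1, 0, 1, 1, 1])

print(f"MLO TPR: {true_positive_rate(mlo_true, mlo_pred):.2%}")
print(f"CC  TPR: {true_positive_rate(cc_true, cc_pred):.2%}")
```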