MINT: A wrapper to make multi-modal and multi-image AI models interactive
Freyberg, Jan, Roy, Abhijit Guha, Spitz, Terry, Freeman, Beverly, Schaekermann, Mike, Strachan, Patricia, Schnider, Eva, Wong, Renee, Webster, Dale R, Karthikesalingam, Alan, Liu, Yun, Dvijotham, Krishnamurthy, Telang, Umesh
During the diagnostic process, doctors incorporate multimodal information including imaging and medical history - and similarly, medical AI development has increasingly become multimodal. In this paper we tackle a more subtle challenge: doctors take a targeted medical history to obtain only the most pertinent pieces of information; how do we enable AI to do the same? We develop a wrapper method named MINT (Make your model INTeractive) that automatically determines what pieces of information are most valuable at each step and asks only for the most useful information. We demonstrate the efficacy of MINT by wrapping a skin disease prediction model, where multiple images and a set of optional answers to 25 standard metadata questions (i.e., structured medical history) are used by a multimodal deep network to provide a differential diagnosis. We show that MINT can identify whether metadata inputs are needed and, if so, which question to ask next. We also demonstrate that when collecting multiple images, MINT can identify whether an additional image would be beneficial and, if so, which type of image to capture. We show that MINT reduces the number of metadata and image inputs needed by 82% and 36.2%, respectively, while maintaining predictive performance. Using real-world data from an AI dermatology system, we show that requiring fewer inputs can retain users who might otherwise drop off without a diagnosis before completing their submission. Qualitative examples show that MINT can closely mimic the step-by-step decision-making process of a clinical workflow, and how this differs for straightforward cases versus more difficult, ambiguous cases. Finally, we demonstrate that MINT is robust to different underlying multimodal classifiers and can be easily adapted to user requirements without significant model re-training.
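The abstract does not specify MINT's selection criterion, but its behavior (ask the next question only when it is expected to help) can be illustrated with a greedy expected-uncertainty-reduction loop. The sketch below is a minimal illustration, not the published method; the `classifier` callable, the question encoding, and the stopping threshold are all hypothetical stand-ins.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a categorical distribution over diagnoses."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def next_question(classifier, inputs, unanswered, answer_space, stop_threshold=0.05):
    """Greedily pick the metadata question whose (simulated) answer most
    reduces predictive entropy; return None if no question helps enough.

    classifier(inputs) -> probability vector over diagnoses (hypothetical API).
    unanswered: question ids not yet asked.
    answer_space: dict mapping question id -> iterable of possible answers.
    """
    base_entropy = entropy(classifier(inputs))
    best_q, best_gain = None, stop_threshold
    for q in unanswered:
        # Expected entropy if question q were answered, averaging uniformly
        # over its possible answers (a real system would weight answers by
        # the model's predictive distribution rather than uniformly).
        expected = np.mean([
            entropy(classifier({**inputs, q: a})) for a in answer_space[q]
        ])
        gain = base_entropy - expected
        if gain > best_gain:
            best_q, best_gain = q, gain
    return best_q  # None means "enough information already collected"
```

The same loop applies to image acquisition by treating "capture image of type t" as another candidate input.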
Towards Accurate Differential Diagnosis with Large Language Models
McDuff, Daniel, Schaekermann, Mike, Tu, Tao, Palepu, Anil, Wang, Amy, Garrison, Jake, Singhal, Karan, Sharma, Yash, Azizi, Shekoofeh, Kulkarni, Kavita, Hou, Le, Cheng, Yong, Liu, Yun, Mahdavi, S Sara, Prakash, Sushant, Pathak, Anupam, Semturs, Christopher, Patel, Shwetak, Webster, Dale R, Dominowska, Ewa, Gottweis, Juraj, Barral, Joelle, Chou, Katherine, Corrado, Greg S, Matias, Yossi, Sunshine, Jake, Karthikesalingam, Alan, Natarajan, Vivek
An accurate differential diagnosis (DDx) is a cornerstone of medical care, often reached through an iterative process of interpretation that combines clinical history, physical examination, investigations, and procedures. Interactive interfaces powered by Large Language Models (LLMs) present new opportunities to both assist and automate aspects of this process. In this study, we introduce an LLM optimized for diagnostic reasoning and evaluate its ability to generate a DDx alone or as an aid to clinicians. 20 clinicians evaluated 302 challenging, real-world medical cases sourced from New England Journal of Medicine (NEJM) case reports. Each case report was read by two clinicians, who were randomized to one of two assistive conditions: assistance from search engines and standard medical resources alone, or LLM assistance in addition to these tools. All clinicians provided a baseline, unassisted DDx prior to using the respective assistive tools. Our LLM for DDx exhibited standalone performance that exceeded that of unassisted clinicians (top-10 accuracy 59.1% vs. 33.6%, p = 0.04). Comparing the two assisted study arms, the DDx quality score was higher for clinicians assisted by our LLM (top-10 accuracy 51.7%) than for clinicians without its assistance (36.1%) (McNemar's test statistic 45.7, p < 0.01) and for clinicians assisted by search (44.4%) (test statistic 4.75, p = 0.03). Further, clinicians assisted by our LLM arrived at more comprehensive differential lists than those without its assistance. Our study suggests that our LLM for DDx has the potential to improve clinicians' diagnostic reasoning and accuracy in challenging cases, meriting further real-world evaluation of its ability to empower physicians and widen patients' access to specialist-level expertise.
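For concreteness, the top-10 accuracy reported above can be computed as in the minimal sketch below. It assumes each differential is a ranked list of diagnosis strings and that matching against the reference diagnosis is a simple string comparison; the study itself used clinician raters to judge matches, so this is an illustration rather than the paper's evaluation code.

```python
def top_k_accuracy(ddx_lists, ground_truths, k=10):
    """Fraction of cases whose reference diagnosis appears among the
    first k entries of the ranked differential."""
    hits = sum(gt in ddx[:k] for ddx, gt in zip(ddx_lists, ground_truths))
    return hits / len(ground_truths)

# Example: a standalone top-10 accuracy of 59.1% corresponds to the
# reference diagnosis appearing in the model's first ten candidates
# for roughly 178 of the 302 NEJM cases.
```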
Predicting optical coherence tomography-derived diabetic macular edema grades from fundus photographs using deep learning
Varadarajan, Avinash, Bavishi, Pinal, Raumviboonsuk, Paisan, Chotcomwongse, Peranut, Venugopalan, Subhashini, Narayanaswamy, Arunachalam, Cuadros, Jorge, Kanai, Kuniyoshi, Bresnick, George, Tadarati, Mongkol, Silpa-archa, Sukhum, Limwattanayingyong, Jirawut, Nganthavee, Variya, Ledsam, Joe, Keane, Pearse A, Corrado, Greg S, Peng, Lily, Webster, Dale R
Diabetic eye disease is one of the fastest-growing causes of preventable blindness. With the advent of anti-VEGF (vascular endothelial growth factor) therapies, it has become increasingly important to detect center-involved diabetic macular edema (DME). However, center-involved DME is diagnosed using optical coherence tomography (OCT), which is generally unavailable at screening sites because of cost and workflow constraints. Instead, screening programs rely on the detection of hard exudates in color fundus photographs as a proxy for DME, often resulting in high false positive and false negative rates. To improve the accuracy of DME screening, we trained a deep learning model to predict OCT-derived DME grades from color fundus photographs. Our "OCT-DME" model had an AUC of 0.89 (95% CI: 0.87-0.91), which corresponds to a sensitivity of 85% at a specificity of 80%. In comparison, three retinal specialists had similar sensitivities (82-85%) but only about half the specificity (45-50%; p < 0.001 for each comparison with the model). The positive predictive value (PPV) of the OCT-DME model was 61% (95% CI: 56-66%), approximately double the 36-38% achieved by the retinal specialists. In addition, we used saliency and other techniques to examine how the model makes its predictions. The ability of deep learning algorithms to make clinically relevant predictions from simple 2D images that would generally require sophisticated 3D imaging equipment has broad relevance to many other applications in medical imaging.
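The reported operating point (sensitivity of 85% at specificity of 80%) is read off the ROC curve in the standard way. Below is a minimal sketch using scikit-learn; `y_true` and `y_score` are hypothetical placeholders for the OCT-derived binary labels and the model's predicted probabilities, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def sensitivity_at_specificity(y_true, y_score, target_specificity=0.80):
    """Find the ROC threshold whose specificity (1 - FPR) is closest to
    the target and return the sensitivity (TPR) at that threshold."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    specificity = 1.0 - fpr
    idx = np.argmin(np.abs(specificity - target_specificity))
    return tpr[idx], thresholds[idx]

# Usage (placeholders): y_true holds OCT-derived center-involved DME
# labels (0/1); y_score holds the model's probability from the fundus
# photograph. The abstract's AUC of 0.89 would come from:
# auc = roc_auc_score(y_true, y_score)
```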