Krishnan, Amita
LEARNER: Learning Granular Labels from Coarse Labels using Contrastive Learning
Gare, Gautam, Armouti, Jana, Madaan, Nikhil, Panda, Rohan, Fox, Tom, Hutchins, Laura, Krishnan, Amita, Rodriguez, Ricardo, DeBoisblanc, Bennett, Ramanan, Deva, Galeotti, John
A crucial question in active patient care is determining whether a treatment is having the desired effect, especially when changes are subtle over short periods. We propose using inter-patient data to train models that can learn to detect these fine-grained changes within a single patient. Specifically, can a model trained on multi-patient scans predict subtle changes in an individual patient's scans? Recent years have seen increasing use of deep learning (DL) to predict disease from biomedical imaging, such as predicting COVID-19 severity from lung ultrasound (LUS) data. While extensive literature documents successful applications of DL systems when well-annotated large-scale datasets are available, collecting a large personalized dataset for an individual patient is difficult. In this work, we investigate the ability of recent computer vision models to learn fine-grained differences while being trained on data exhibiting larger differences. We evaluate on an in-house LUS dataset and the public ADNI brain MRI dataset. We find that, by employing contrastive learning, models pre-trained on clips from multiple patients can better predict fine-grained differences in scans from a single patient.
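As a rough illustration of the contrastive pre-training idea described in the abstract, the sketch below shows a generic InfoNCE-style loss such a model might be pre-trained with on paired views of multi-patient clips. The function name, tensor shapes, and temperature value are illustrative assumptions, not details from the paper.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.1):
        # z1, z2: (N, D) embeddings of two views of the same N clips;
        # row i of z1 and row i of z2 form a positive pair, while all
        # other rows in the batch serve as negatives.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature  # (N, N) cosine similarities
        targets = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, targets)

Pre-training with such a loss pulls embeddings of clips from the same source together and pushes others apart, which is one plausible mechanism for the fine-grained sensitivity the abstract reports; the paper's exact formulation may differ.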
Improving Model's Interpretability and Reliability using Biomarkers
Gare, Gautam Rajendrakumar, Fox, Tom, Chansangavej, Beam, Krishnan, Amita, Rodriguez, Ricardo Luis, deBoisblanc, Bennett P, Ramanan, Deva Kannan, Galeotti, John Michael
Accurate and interpretable diagnostic models are crucial in the safety-critical field of medicine. We investigate the interpretability of our proposed biomarker-based lung ultrasound diagnostic pipeline and its potential to enhance clinicians' diagnostic capabilities. The objective of this study is to assess whether explanations from a biomarker-based decision tree classifier can improve users' ability to identify inaccurate model predictions compared with conventional saliency maps. Our findings demonstrate that decision tree explanations grounded in clinically established biomarkers can assist clinicians in detecting false positives, thus improving the reliability of diagnostic models in medicine.
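To illustrate what a biomarker-based decision tree explanation can look like in practice, here is a minimal scikit-learn sketch. The biomarker names and the synthetic data are hypothetical stand-ins, not the study's actual features, dataset, or pipeline.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical lung-ultrasound biomarker features, one score per scan.
    biomarkers = ["a_lines", "b_lines", "pleural_thickening", "consolidation"]

    # Synthetic placeholder data standing in for per-scan biomarker scores.
    rng = np.random.default_rng(0)
    X = rng.random((200, len(biomarkers)))
    y = (X[:, 1] + X[:, 3] > 1.0).astype(int)  # toy severity rule for the demo

    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # The learned decision path doubles as the explanation shown to users.
    print(export_text(clf, feature_names=biomarkers))

Because each prediction reduces to a short chain of threshold tests on clinically meaningful quantities, a clinician can sanity-check the model's reasoning directly, which is the kind of mechanism the abstract credits for catching false positives.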