
Touchstone Benchmark: Are We on the Right Way for Evaluating AI Algorithms for Medical Segmentation?

Neural Information Processing Systems

How can we test AI performance? This question seems trivial, but it isn't. Standard benchmarks often suffer from problems such as in-distribution and small test sets, oversimplified metrics, unfair comparisons, and short-term outcome pressure. As a consequence, good performance on standard benchmarks does not guarantee success in real-world scenarios. To address these problems, we present Touchstone, a large-scale collaborative segmentation benchmark covering 9 types of abdominal organs.
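A standard overlap metric in organ-segmentation benchmarks is the Dice similarity coefficient (DSC). The sketch below is a minimal, illustrative DSC computation in NumPy; it is not Touchstone's official evaluation code, and the organ-label mapping is hypothetical.

import numpy as np

def dice_score(pred, truth):
    """DSC between two binary masks of the same shape (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Per-organ evaluation over a multi-label volume (label IDs are hypothetical).
ORGAN_LABELS = {"liver": 1, "spleen": 2, "pancreas": 3}

def per_organ_dice(pred_vol, truth_vol):
    return {name: dice_score(pred_vol == lab, truth_vol == lab)
            for name, lab in ORGAN_LABELS.items()}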


Eye-gaze Guided Multi-modal Alignment for Medical Representation Learning

Neural Information Processing Systems

In medical multi-modal frameworks, the alignment of cross-modality features presents a significant challenge. Existing works learn implicitly aligned features from the data, without considering the explicit relationships in the medical context, and this reliance on data alone may limit how well the learned alignment generalizes. In this work, we propose the Eye-gaze Guided Multi-modal Alignment (EGMA) framework, which harnesses eye-gaze data for better alignment of medical visual and textual features. We explore the natural auxiliary role of radiologists' eye-gaze in aligning medical images and text, and introduce a novel approach that uses eye-gaze data collected synchronously from radiologists during diagnostic evaluations. On downstream image classification and image-text retrieval tasks across four medical datasets, EGMA achieves state-of-the-art performance and stronger generalization across datasets. Additionally, we explore the impact of varying amounts of eye-gaze data on model performance, highlighting the feasibility and utility of integrating this auxiliary data into multi-modal alignment frameworks.
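The abstract does not spell out the training objective, so the following is only one plausible sketch of the idea in PyTorch: a CLIP-style contrastive image-text loss whose per-pair terms are re-weighted by a gaze-derived score. The tensor names and the weighting scheme are assumptions for illustration, not EGMA's published method.

import torch
import torch.nn.functional as F

def gaze_weighted_clip_loss(img_emb, txt_emb, gaze_weight, temperature=0.07):
    """img_emb, txt_emb: (N, d) embeddings of paired images and reports.
    gaze_weight: (N,) hypothetical per-pair weights derived from how strongly
    the radiologist's fixations overlap the reported finding."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature  # (N, N) pairwise similarities
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    loss_i2t = F.cross_entropy(logits, targets, reduction="none")
    loss_t2i = F.cross_entropy(logits.t(), targets, reduction="none")
    per_pair = 0.5 * (loss_i2t + loss_t2i)
    return (gaze_weight * per_pair).mean()  # gaze-modulated contrastive loss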


Copycats: the many lives of a publicly available medical imaging dataset
Amelia Jiménez-Sánchez

Neural Information Processing Systems

Medical Imaging (MI) datasets are fundamental to artificial intelligence in healthcare. The accuracy, robustness, and fairness of diagnostic algorithms depend on the data (and its quality) used to train and evaluate the models. MI datasets used to be proprietary, but have become increasingly available to the public, including on community-contributed platforms (CCPs) like Kaggle or HuggingFace. While open data is important to enhance the redistribution of data's public value, we find that the current CCP governance model fails to uphold the quality and the recommended practices needed for sharing, documenting, and evaluating datasets. In this paper, we analyze publicly available machine learning datasets on CCPs, discussing their context and identifying limitations and gaps in the current CCP landscape. We highlight differences between MI and computer vision datasets, particularly in the potentially harmful downstream effects of poor adoption of recommended dataset management practices. We compare the analyzed datasets across several dimensions, including data sharing, data documentation, and maintenance. We find vague licenses, a lack of persistent identifiers and storage, duplicates, and missing metadata, with differences between the platforms. Our research contributes to efforts in responsible data curation and AI algorithms for healthcare.
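Two of the reported gaps, duplicate files and missing license metadata, are straightforward to audit automatically. The sketch below uses only the Python standard library; the directory layout and metadata schema are hypothetical, not those of any specific CCP.

import hashlib
import json
from pathlib import Path
from collections import defaultdict

def find_exact_duplicates(root):
    """Group files under `root` by the SHA-256 hash of their bytes;
    any group with more than one file is an exact duplicate."""
    groups = defaultdict(list)
    for p in Path(root).rglob("*"):
        if p.is_file():
            digest = hashlib.sha256(p.read_bytes()).hexdigest()
            groups[digest].append(p)
    return {h: ps for h, ps in groups.items() if len(ps) > 1}

def license_missing(metadata_file):
    """True if the dataset's metadata declares no usable license
    (the 'license' key and its values are assumed, for illustration)."""
    meta = json.loads(Path(metadata_file).read_text())
    return meta.get("license") in (None, "", "unknown", "other")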


Integrating Deep Metric Learning with Coreset for Active Learning in 3D Segmentation

Neural Information Processing Systems

Deep learning has driven remarkable advances in machine learning, yet it often demands extensive annotated data. Tasks like 3D semantic segmentation impose a substantial annotation burden, especially in domains like medicine, where expert annotations are costly. Active learning (AL) holds great potential to alleviate this annotation burden in 3D medical segmentation. Most existing AL methods, however, are not tailored to the medical domain. And while weakly-supervised methods have been explored to reduce the annotation burden, the fusion of AL with weak supervision remains unexplored, despite its potential to significantly reduce annotation costs.
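For context on the coreset side, a widely used selection rule is k-center greedy over feature embeddings (Sener and Savarese, 2018): repeatedly pick the unlabeled point farthest from everything selected so far. The sketch below shows that rule alone; the paper's integration with deep metric learning is not reproduced here, and the function names are illustrative.

import numpy as np

def k_center_greedy(features, labeled_idx, budget):
    """Select `budget` unlabeled points that best cover the embedding space.
    features: (N, d) embeddings; labeled_idx: indices already labeled."""
    n = features.shape[0]
    if len(labeled_idx) > 0:
        centers = features[np.asarray(labeled_idx)]
        # Distance of every point to its nearest already-labeled center.
        min_dist = np.linalg.norm(
            features[:, None, :] - centers[None, :, :], axis=-1
        ).min(axis=1)
    else:
        min_dist = np.full(n, np.inf)  # nothing labeled yet: all uncovered
    picks = []
    for _ in range(budget):
        idx = int(np.argmax(min_dist))  # farthest point = least covered region
        picks.append(idx)
        new_dist = np.linalg.norm(features - features[idx], axis=-1)
        min_dist = np.minimum(min_dist, new_dist)  # update coverage distances
    return picks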


Appendix

Neural Information Processing Systems

Despite initial evidence that explanations might be useful for detecting that a model relies on spurious signals [Lapuschkin et al., 2019, Rieger et al., 2020], a different line of work directly counters this evidence. Zimmermann et al. [2021] showed that feature visualizations [Olah et al., 2017] are no more effective than dataset examples at improving a human's understanding of the features that highly activate a DNN's intermediate neuron. Increasing evidence demonstrates that current post hoc explanation approaches might be ineffective for model debugging in practice [Chen et al., 2021, Alqaraawi et al., 2020, Ghassemi et al., 2021, Balagopalan et al., 2022, Poursabzi-Sangdeh et al., 2018, Bolukbasi et al., 2021]. In a promising demonstration, Lapuschkin et al. [2019] apply a clustering procedure to the LRP saliency masks derived from a trained model. In their application, the clusters that emerge separate groups of inputs where, presumably, the model relies on different features for its output decision. Our work differs from theirs in a key way: the demonstration of Lapuschkin et al. [2019] seeks to understand model behavior, not to perform slice discovery, and there is no reason why a low-performing cluster should emerge from such a clustering procedure. Schioppa et al. [2022] address this problem by forming a low-rank approximation of the Hessian H; in their experiments they choose the rank D to be around 50.
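To make the clustering step concrete, here is a hedged sketch in the spirit of the procedure of Lapuschkin et al. [2019]: flatten per-input saliency masks, cluster them, and inspect each cluster's accuracy. Consistent with the point above, nothing in this procedure forces a low-performing cluster to emerge; the LRP computation itself and all inputs are assumed, not shown.

import numpy as np
from sklearn.cluster import KMeans

def cluster_saliency_masks(masks, correct, k=8):
    """masks: (N, H, W) saliency maps, one per input;
    correct: (N,) booleans, whether the model classified each input correctly."""
    flat = masks.reshape(len(masks), -1)  # one feature vector per mask
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(flat)
    # Per-cluster accuracy: a strikingly low value MAY expose a slice where
    # the model relies on a spurious feature, but it is not guaranteed to.
    return {c: float(correct[labels == c].mean()) for c in range(k)}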