
Medical imaging classification


Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization

Neural Information Processing Systems

Recently, we have witnessed great progress in medical imaging classification through the adoption of deep neural networks. However, these advanced models still require access to sufficiently large and representative training datasets, which is often infeasible in clinically realistic environments. When trained on limited datasets, a deep neural network lacks generalization capability: a network trained on data from one distribution (e.g., data captured by a particular device vendor or patient population) may fail to generalize to data from another distribution. In this paper, we introduce a simple but effective approach to improve the generalization capability of deep neural networks for medical imaging classification. Motivated by the observation that the domain variability of medical images is to some extent compact, we propose to learn a representative feature space through variational encoding with a novel linear-dependency regularization term that captures the shareable information among medical data collected from different domains. As a result, the trained neural network is expected to generalize better to "unseen" medical data. Experimental results on two challenging medical imaging classification tasks indicate that our method achieves better cross-domain generalization than state-of-the-art baselines.
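The abstract does not give the exact form of the linear-dependency regularizer. As an illustrative sketch only (the function name and the rank-1 target are assumptions, not the paper's formulation), one way to encourage linear dependency among domain-level features is a low-rank surrogate on the stacked per-domain mean features:

```python
import numpy as np

def linear_dependency_penalty(domain_feats):
    """Illustrative surrogate for a linear-dependency regularizer.

    domain_feats: list of (n_i, d) arrays, one per source domain.
    Stacks the per-domain mean features into a (num_domains, d) matrix
    and returns the sum of all singular values beyond the first one,
    which is zero exactly when the domain means all lie on a single
    shared direction, i.e. are linearly dependent.
    """
    means = np.stack([f.mean(axis=0) for f in domain_feats])  # (D, d)
    s = np.linalg.svd(means, compute_uv=False)
    return float(s[1:].sum())  # small when the domain means are nearly rank-1

# Example: three domains whose mean features are scalar multiples
# of one shared vector, so the penalty is (numerically) zero.
base = np.array([1.0, 2.0, 3.0])
feats = [np.outer(np.ones(5) * k, base) for k in (1.0, 2.0, 0.5)]
penalty = linear_dependency_penalty(feats)
```

In a training loop this scalar would simply be added to the task loss; here it only illustrates the geometric intuition of "compact" cross-domain variability.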


Sm: enhanced localization in Multiple Instance Learning for medical imaging classification

Neural Information Processing Systems

Multiple Instance Learning (MIL) is widely used in medical imaging classification to reduce the labeling effort. While only bag labels are available for training, one typically seeks predictions at both bag and instance levels (classification and localization tasks, respectively). Early MIL methods treated the instances in a bag independently. Recent methods account for global and local dependencies among instances. Although they have yielded excellent results in classification, their performance in terms of localization is comparatively limited.
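The Sm method itself is not described in this excerpt, but the attention-pooling idea behind many recent MIL methods (in the style of Ilse et al.'s attention-based MIL) can be sketched as follows; the parameter names `w_att` and `w_cls` are illustrative placeholders, not part of any specific model:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil(instance_feats, w_att, w_cls):
    """Attention-based MIL pooling for one bag (illustrative sketch).

    instance_feats: (n, d) features for the n instances in a bag.
    w_att: (d,) attention scoring vector (hypothetical parameters).
    w_cls: (d,) linear classifier on the pooled bag representation.

    Returns (bag_score, attention); the attention weights double as a
    crude localization signal, since high-attention instances are the
    ones driving the bag-level prediction.
    """
    scores = instance_feats @ w_att        # (n,) per-instance attention logits
    attention = softmax(scores)            # weights sum to 1 over the bag
    bag_repr = attention @ instance_feats  # (d,) weighted average of instances
    bag_score = float(bag_repr @ w_cls)    # bag-level prediction logit
    return bag_score, attention

# Toy bag: one salient instance among two background instances.
bag = np.array([[0.1, 0.0], [0.0, 0.1], [3.0, 3.0]])
score, att = attention_mil(bag, w_att=np.array([1.0, 1.0]), w_cls=np.array([1.0, 0.0]))
```

The gap the abstract points at is visible even in this toy: the attention weights are trained only through the bag loss, so good bag classification does not guarantee that they align with the truly positive instances.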


Review for NeurIPS paper: Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization

Neural Information Processing Systems

Weaknesses: The linear combination of multi-source domains has been investigated in [a] Multi-source Domain Adaptation for Face Recognition and [b] Multi-Source Domain Adaptation: A Causal View, which should be discussed. Using KL divergence or a GAN to align multiple source domains is a fairly common operation, and the method seems like a simple combination of [37] and KL-divergence-based alignment. Can other divergence measures be applied in this framework? The abstract claims that medical images are usually heterogeneous, but the tested datasets are homogeneous.


Review for NeurIPS paper: Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization

Neural Information Processing Systems

Four knowledgeable reviewers have mixed opinions: R1, R3, and R4 -- 'Marginally above the acceptance threshold', and R2 -- 'Reject'. R2, the least confident of the four reviewers, raises a main concern about the missing comparison with two works, [a] Multi-source Domain Adaptation for Face Recognition and [b] Multi-Source Domain Adaptation: A Causal View. Having read the paper and the rebuttal, I decided to downgrade this concern because the paper is mainly about domain generalization (DG) rather than domain adaptation (DA). The two settings differ substantially, so a thorough comparison of the proposed DG approach with DA approaches is not strictly necessary. The authors also agreed to cite these works in a future version, which is fine with me.


A conformalized learning of a prediction set with applications to medical imaging classification

Hirsch, Roy; Goldberger, Jacob

arXiv.org Artificial Intelligence

Medical imaging classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge that prevents their deployment in clinical settings. We present an algorithm that can modify any classifier to produce a prediction set containing the true label with a user-specified probability, such as 90%. We train a network to predict an instance-based version of the Conformal Prediction threshold; the threshold is then conformalized to ensure the required coverage. We applied the proposed algorithm to several standard medical imaging classification datasets. The experimental results demonstrate that our method outperforms current approaches, producing smaller prediction sets on average while maintaining the desired coverage.
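The learned, instance-based threshold is this paper's contribution; the standard split-conformal baseline it builds on can be sketched as follows (function and variable names are illustrative, and this is the textbook procedure, not the authors' method):

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for a K-class classifier (baseline sketch).

    cal_probs:  (n, K) softmax outputs on a held-out calibration set.
    cal_labels: (n,) true labels for the calibration set.
    test_probs: (m, K) softmax outputs for test points.
    Returns one prediction set per test point; under exchangeability the
    sets contain the true label with marginal probability >= 1 - alpha.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # Keep every class whose score would fall below the calibrated threshold
    return [np.where(1.0 - p <= q)[0].tolist() for p in test_probs]

# Toy example: a confident, well-calibrated classifier yields singleton sets.
rng = np.random.default_rng(0)
cal_labels = rng.integers(0, 3, size=200)
cal_probs = np.full((200, 3), 0.05)
cal_probs[np.arange(200), cal_labels] = 0.9
sets = conformal_prediction_sets(cal_probs, cal_labels, np.array([[0.9, 0.05, 0.05]]))
```

The paper's approach replaces the single global quantile `q` with a per-instance threshold predicted by a network and then conformalized, which is what allows smaller sets at the same coverage.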