Collaborating Authors

MCU-Net: A framework towards uncertainty representations for decision support system patient referrals in healthcare contexts

Machine Learning

Incorporating a human-in-the-loop system when deploying automated decision support is critical in healthcare contexts to create trust, as well as to provide reliable performance on a patient-to-patient basis. Deep learning methods, while achieving high performance, do not allow for this patient-centered approach due to their lack of uncertainty representation. Thus, we present a framework for uncertainty representation in medical image segmentation using MCU-Net, which combines a U-Net with Monte Carlo Dropout and is evaluated with four different uncertainty metrics. The framework adds a human-in-the-loop aspect based on an uncertainty threshold for automated referral of uncertain cases to a medical professional. We demonstrate that MCU-Net, combined with epistemic uncertainty and an uncertainty threshold tuned for this application, maximizes automated performance on an individual patient level while referring truly uncertain cases. This is a step towards uncertainty representations when deploying machine learning based decision support in healthcare settings.
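The referral mechanism described above can be sketched in a few lines: run several stochastic forward passes with dropout active, take the per-pixel variance across passes as an epistemic uncertainty map, and refer the case when a scalar summary of that map exceeds a threshold. This is a minimal numpy sketch under assumptions not stated in the abstract: the number of passes, the mean-over-pixels aggregation, and the threshold value are all illustrative choices, and `forward_pass` stands in for a real dropout-enabled U-Net.

```python
import numpy as np

def mc_dropout_predictions(forward_pass, x, T=20):
    """Run T stochastic forward passes (dropout kept active at test time)
    and stack the resulting segmentation probability maps: shape (T, H, W)."""
    return np.stack([forward_pass(x) for _ in range(T)])

def epistemic_uncertainty(samples):
    """Per-pixel variance across the MC samples, one common epistemic proxy."""
    return samples.var(axis=0)

def refer_or_accept(samples, threshold=0.05):
    """Aggregate pixel-wise uncertainty to one scalar per case and refer the
    case to a clinician if it exceeds the threshold. The mean aggregation and
    the threshold value here are illustrative assumptions, not the paper's."""
    mean_prediction = samples.mean(axis=0)
    score = epistemic_uncertainty(samples).mean()
    decision = "refer" if score > threshold else "accept"
    return decision, mean_prediction, score
```

In practice the threshold would be tuned on a validation set so that accepted cases meet a target per-patient performance level.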

Uncertainty Reduction for Active Image Clustering via a Hybrid Global-Local Uncertainty Model

AAAI Conferences

We propose a novel combined global/local model for active semi-supervised spectral clustering based on the principle of uncertainty reduction. We iteratively compute the derivative of the eigenvectors produced by spectral decomposition with respect to each item/image, and combine this with the local label entropy given by the current clustering results in order to estimate the uncertainty reduction potential of each item in the dataset. We then generate pairwise queries with respect to the best candidate item and retrieve the needed constraints from the user. We evaluate our method on three different image datasets (faces, leaves, and dogs) and consistently demonstrate performance superior to the current state of the art.
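The local half of the candidate-selection step can be illustrated directly: score each item by the Shannon entropy of its soft cluster membership and query the item whose assignment is most uncertain. This numpy sketch covers only the local label-entropy term; the paper's global eigenvector-derivative term and the pairwise-constraint update are omitted, and the soft-assignment input is an assumed interface.

```python
import numpy as np

def label_entropy(soft_assignments):
    """Shannon entropy of each item's cluster-membership distribution.
    soft_assignments: array of shape (n_items, n_clusters), rows sum to 1."""
    p = np.clip(soft_assignments, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_query_item(soft_assignments):
    """Return the index of the item with the most uncertain assignment.
    Pairwise queries would then be generated against this candidate."""
    return int(np.argmax(label_entropy(soft_assignments)))
```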

Quantifying and Leveraging Predictive Uncertainty for Medical Image Assessment


The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast, and more. Most notable is the case of chest radiography, where there is high inter-rater variability in the detection and classification of abnormalities. This is largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D ultrasound images. Often, the anatomical context captured in a frame is insufficient to recognize the underlying anatomy.

Relative Uncertainty Learning for Facial Expression Recognition Supplementary Material

Neural Information Processing Systems

We provide visualization results on MNIST and CIFAR to show that our uncertainty learning method also works well on datasets beyond facial expression recognition (FER) tasks. We plot the most uncertain images of CIFAR-10 according to the uncertainty values learned by RUL in Figure 1, using red rectangles to mark misclassified images and green rectangles to mark correctly classified ones. 53 of the 104 most uncertain images are misclassified by the network, which indicates that our relative uncertainty learning (RUL) assigns large uncertainty values to hard images that are more likely to be misclassified. For contrast, we display the images with the smallest uncertainty values in Figure 2; the network achieves 100% accuracy on these samples.

Estimating Uncertainty and Interpretability in Deep Learning for Coronavirus (COVID-19) Detection

Machine Learning

Deep learning has achieved state-of-the-art performance in medical imaging. However, these methods for disease detection focus exclusively on improving the accuracy of classification or prediction without quantifying the uncertainty in a decision. Knowing how much confidence there is in a computer-based medical diagnosis is essential for gaining clinicians' trust in the technology and thereby improving treatment. Today, the 2019 coronavirus (SARS-CoV-2) infections are a major healthcare challenge around the world, and detecting COVID-19 in X-ray images is crucial for diagnosis, assessment, and treatment. However, expressing diagnostic uncertainty in a report is a challenging yet inevitable task for radiologists. In this paper, we investigate how drop-weights based Bayesian Convolutional Neural Networks (BCNN) can estimate uncertainty in deep learning solutions to improve the diagnostic performance of the human-machine team, using a publicly available COVID-19 chest X-ray dataset, and show that the uncertainty in a prediction correlates highly with the accuracy of that prediction. We believe that the availability of uncertainty-aware deep learning solutions will enable wider adoption of Artificial Intelligence (AI) in clinical settings.
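Classification uncertainty of the kind studied here is typically computed from the spread of stochastic forward passes. A minimal numpy sketch, assuming the model's MC-sampled class probabilities are already stacked into an array: the entropy of the mean predictive distribution gives total uncertainty, and subtracting the mean per-sample entropy leaves the epistemic component (the BALD-style mutual information). The array shapes and the choice of these two particular scores are assumptions for illustration, not necessarily the paper's exact metrics.

```python
import numpy as np

def predictive_entropy(probs):
    """probs: (T, N, C) class probabilities from T stochastic passes.
    Returns per-item entropy of the mean predictive distribution (total
    uncertainty), shape (N,)."""
    mean_p = probs.mean(axis=0)
    return -(mean_p * np.log(np.clip(mean_p, 1e-12, 1.0))).sum(axis=-1)

def mutual_information(probs):
    """BALD score: total uncertainty minus the expected per-sample entropy,
    isolating the epistemic (model disagreement) component, shape (N,)."""
    sample_ent = -(probs * np.log(np.clip(probs, 1e-12, 1.0))).sum(axis=-1).mean(axis=0)
    return predictive_entropy(probs) - sample_ent
```

Items where the passes disagree get high mutual information, so ranking predictions by these scores is one way to check the reported correlation between uncertainty and accuracy.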