MIT CSAIL claims its AI system can predict and classify pulmonary edemas

#artificialintelligence

An MIT Computer Science and Artificial Intelligence Lab (CSAIL) team claims to have developed an AI system that can analyze X-rays to anticipate certain kinds of heart failure. By detecting signs of excess fluid in the lungs, a condition known as pulmonary edema, the researchers say it can correctly grade heart failure severity on a four-level scale more than half the time. Every year, roughly 12.5% of deaths in the U.S. are caused at least in part by heart failure, according to the U.S. Centers for Disease Control and Prevention. One of acute heart failure's most common warning signs is edema, and a patient's exact level of excess fluid often dictates a doctor's course of action. But making these determinations is difficult, requiring clinicians to rely on subtle features in X-rays that can sometimes lead to inconsistent diagnoses and treatment plans.
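As a rough sketch of how such a system might be framed (not CSAIL's actual model), the snippet below treats edema grading as four-way image classification on top of a convolutional backbone; the DenseNet choice, the input size, and the untrained weights are assumptions for illustration only.

```python
# Minimal sketch: 4-level edema severity grading as image classification.
# The DenseNet backbone and 4-class head are illustrative assumptions,
# not the architecture described in the article.
import torch
import torch.nn as nn
from torchvision import models

class EdemaSeverityNet(nn.Module):
    def __init__(self, num_levels: int = 4):
        super().__init__()
        # weights=None keeps the example self-contained; in practice a
        # pretrained backbone would normally be used.
        self.backbone = models.densenet121(weights=None)
        in_features = self.backbone.classifier.in_features
        # Replace the ImageNet head with a four-way severity head (levels 0-3).
        self.backbone.classifier = nn.Linear(in_features, num_levels)

    def forward(self, xray: torch.Tensor) -> torch.Tensor:
        return self.backbone(xray)  # logits over the four severity levels

model = EdemaSeverityNet()
dummy_xray = torch.randn(1, 3, 224, 224)  # one RGB-converted chest X-ray
print(model(dummy_xray).shape)            # torch.Size([1, 4])
```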


CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison

arXiv.org Artificial Intelligence

Large, labeled datasets have driven deep learning methods to achieve expert-level performance on a variety of medical imaging tasks. We present CheXpert, a large dataset that contains 224,316 chest radiographs of 65,240 patients. We design a labeler to automatically detect the presence of 14 observations in radiology reports, capturing uncertainties inherent in radiograph interpretation. We investigate different approaches to using the uncertainty labels for training convolutional neural networks that output the probability of these observations given the available frontal and lateral radiographs. On a validation set of 200 chest radiographic studies, which were manually annotated by 3 board-certified radiologists, we find that different uncertainty approaches are useful for different pathologies. We then evaluate our best model on a test set composed of 500 chest radiographic studies annotated by a consensus of 5 board-certified radiologists, and compare the performance of our model to that of 3 additional radiologists in the detection of 5 selected pathologies. On Cardiomegaly, Edema, and Pleural Effusion, the model ROC and PR curves lie above all 3 radiologist operating points. We release the dataset to the public as a standard benchmark to evaluate performance of chest radiograph interpretation models. The dataset is freely available at https://stanfordmlgroup.github.io/competitions/chexpert.
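The abstract mentions comparing different ways of using the uncertainty labels during training. The sketch below, which assumes uncertain mentions are encoded as -1, shows two simple relabeling policies plus a masked loss that ignores uncertain entries; it is a simplified stand-in for the approaches evaluated in the paper, not their exact implementation.

```python
# Sketch of simple policies for CheXpert-style uncertainty labels before
# training a multi-label CNN. Encoding uncertain mentions as -1 is an
# assumption made for illustration.
import torch
import torch.nn.functional as F

def apply_uncertainty_policy(labels: torch.Tensor, policy: str) -> torch.Tensor:
    """labels: (batch, n_observations) with 1=positive, 0=negative, -1=uncertain."""
    if policy == "map-to-zeros":      # treat uncertain mentions as negative
        return torch.where(labels == -1, torch.zeros_like(labels), labels)
    if policy == "map-to-ones":       # treat uncertain mentions as positive
        return torch.where(labels == -1, torch.ones_like(labels), labels)
    raise ValueError(policy)

def masked_bce(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Ignore policy: drop uncertain entries from the loss instead of relabeling."""
    mask = labels != -1
    return F.binary_cross_entropy_with_logits(logits[mask], labels[mask].float())

labels = torch.tensor([[1.0, -1.0, 0.0]])          # 3 observations for brevity
logits = torch.tensor([[2.0, 0.5, -1.0]])
print(apply_uncertainty_policy(labels, "map-to-ones"))  # tensor([[1., 1., 0.]])
print(masked_bce(logits, labels))                       # loss over certain entries only
```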


Large Scale Automated Reading of Frontal and Lateral Chest X-Rays using Dual Convolutional Neural Networks

arXiv.org Machine Learning

The MIMIC-CXR dataset is (to date) the largest released chest x-ray dataset consisting of 473,064 chest x-rays and 206,574 radiology reports collected from 63,478 patients. We present the results of training and evaluating a collection of deep convolutional neural networks on this dataset to recognize multiple common thorax diseases. To the best of our knowledge, this is the first work that trains CNNs for this task on such a large collection of chest x-ray images, which is over four times the size of the largest previously released chest x-ray corpus (ChestX-Ray14). We describe and evaluate individual CNN models trained on frontal and lateral CXR view types. In addition, we present a novel DualNet architecture that emulates routine clinical practice by simultaneously processing both frontal and lateral CXR images obtained from a radiological exam. Our DualNet architecture shows improved performance in recognizing findings in CXR images when compared to applying separate baseline frontal and lateral classifiers.
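A minimal sketch of the dual-view idea described above, assuming two separate encoders whose pooled features are concatenated before a shared multi-label classifier; the ResNet backbones and fusion by concatenation are illustrative choices, not the paper's exact DualNet configuration.

```python
# Rough sketch of a dual-view chest X-ray classifier: separate CNN encoders
# for the frontal and lateral images, fused for a shared multi-label head.
import torch
import torch.nn as nn
from torchvision import models

class DualViewCXRNet(nn.Module):
    def __init__(self, num_findings: int = 14):
        super().__init__()
        self.frontal = models.resnet18(weights=None)   # assumption: ResNet-18 encoders
        self.lateral = models.resnet18(weights=None)
        feat_dim = self.frontal.fc.in_features
        self.frontal.fc = nn.Identity()                # keep pooled features only
        self.lateral.fc = nn.Identity()
        self.classifier = nn.Linear(2 * feat_dim, num_findings)

    def forward(self, frontal: torch.Tensor, lateral: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.frontal(frontal), self.lateral(lateral)], dim=1)
        return self.classifier(fused)                  # multi-label logits per finding

model = DualViewCXRNet()
frontal = torch.randn(2, 3, 224, 224)
lateral = torch.randn(2, 3, 224, 224)
print(model(frontal, lateral).shape)  # torch.Size([2, 14])
```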


Extracting and Learning Fine-Grained Labels from Chest Radiographs

arXiv.org Artificial Intelligence

Chest radiographs are the most common diagnostic exam in emergency rooms and intensive care units today. Recently, a number of researchers have begun working on large chest X-ray datasets to develop deep learning models for recognition of a handful of coarse finding classes such as opacities, masses and nodules. In this paper, we focus on extracting and learning fine-grained labels for chest X-ray images. Specifically we develop a new method of extracting fine-grained labels from radiology reports by combining vocabulary-driven concept extraction with phrasal grouping in dependency parse trees for association of modifiers with findings. A total of 457 fine-grained labels depicting the largest spectrum of findings to date were selected and sufficiently large datasets acquired to train a new deep learning model designed for fine-grained classification. We show results that indicate a highly accurate label extraction process and a reliable learning of fine-grained labels. The resulting network, to our knowledge, is the first to recognize fine-grained descriptions of findings in images covering over nine modifiers including laterality, location, severity, size and appearance.
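As a toy illustration of vocabulary-driven concept extraction combined with dependency-based modifier grouping, the snippet below uses spaCy with a tiny finding vocabulary and a handful of modifier relations; the vocabulary, the chosen dependency labels, and spaCy itself are assumptions, and the paper's actual pipeline is considerably more elaborate.

```python
# Toy sketch: find terms from a small finding vocabulary in a report sentence
# and attach their modifiers (e.g., laterality, severity) via the parse tree.
import spacy

nlp = spacy.load("en_core_web_sm")
FINDINGS = {"effusion", "opacity", "edema", "nodule"}
MODIFIER_DEPS = {"amod", "compound", "advmod"}

def extract_fine_grained_labels(report_sentence: str) -> list[str]:
    doc = nlp(report_sentence)
    labels = []
    for token in doc:
        if token.lemma_.lower() in FINDINGS:
            # Collect modifier children of the finding from the dependency tree.
            mods = [c.text.lower() for c in token.children if c.dep_ in MODIFIER_DEPS]
            labels.append(" ".join(mods + [token.lemma_.lower()]))
    return labels

print(extract_fine_grained_labels("There is a small left pleural effusion."))
# e.g. ['small left pleural effusion'] (exact output depends on the parser)
```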


The Concept of Heart Failure with Machine Learning

#artificialintelligence

Every year, nearly one in eight U.S. deaths is caused at least in part by heart failure. One of acute heart failure's most common warning signs is excess fluid in the lungs, a condition known as 'pulmonary edema.' A patient's excess fluid level often dictates the doctor's course of action, but making that determination requires clinicians to rely on subtle features in X-rays, which can sometimes lead to inconsistent diagnoses and treatment plans. A group of researchers at MIT's Computer Science and Artificial Intelligence Lab (CSAIL) has developed a machine learning model that can analyze an X-ray to quantify how severe the edema is on a four-level scale, from 0 (healthy) to 3 (very severe).
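Purely for illustration, here is a tiny inference-time snippet showing how a four-way model output could be mapped onto that 0-to-3 scale; the logits and the label wording are made up, not the CSAIL system's output.

```python
# Illustrative inference step only: turning a hypothetical 4-way output into
# the 0-3 severity grade described above.
import torch

SEVERITY = {0: "no edema", 1: "mild", 2: "moderate", 3: "severe"}

logits = torch.tensor([0.2, 1.7, 0.4, -0.9])   # made-up model output
probs = torch.softmax(logits, dim=0)
grade = int(torch.argmax(probs))
print(grade, SEVERITY[grade], float(probs[grade]))  # predicted grade, name, probability
```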