X-ray Dissectography Enables Stereotography to Improve Diagnostic Performance

X-ray imaging is the most widely used medical imaging technology. While x-ray radiography is cost-effective, tissue structures are superimposed along the x-ray paths. Computed tomography (CT), on the other hand, reconstructs internal structures, but it increases the radiation dose and is complicated and expensive. Here we propose "x-ray dissectography" to digitally extract a target organ or tissue from a few radiographic projections for stereographic and tomographic analysis in a deep learning framework. As an exemplary embodiment, we propose a general x-ray dissectography network, a dedicated x-ray stereotography network, and the x-ray imaging systems to implement these functionalities. Our experiments show that x-ray stereography can be achieved for an isolated organ, in this case the lungs, suggesting the feasibility of transforming conventional radiographic reading into stereographic examination of the isolated organ, which potentially allows higher sensitivity and specificity, and even tomographic visualization of the target. With further improvements, x-ray dissectography promises to become a new x-ray imaging modality for CT-grade diagnosis at a radiation dose and system cost comparable to those of radiographic or tomosynthetic imaging.
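The stereographic idea above rests on classical stereo geometry: a feature's depth can be recovered from its disparity between two views. As an illustrative sketch only (not the paper's network, and with hypothetical dimensions), the pinhole-stereo relation depth = baseline × focal / disparity looks like:

```python
# Illustrative stereo-geometry sketch, not the paper's deep learning method.
# With two projections acquired at a lateral offset (the baseline), a
# feature's depth follows from its disparity between the two views.

def depth_from_disparity(baseline_mm: float, focal_mm: float, disparity_mm: float) -> float:
    """Classical pinhole-stereo depth estimate; all quantities hypothetical."""
    if disparity_mm <= 0:
        raise ValueError("disparity must be positive")
    return baseline_mm * focal_mm / disparity_mm

# Toy geometry: sources 10 mm apart, 1000 mm source-to-detector distance,
# feature shifts 20 mm between the two projections.
z = depth_from_disparity(baseline_mm=10.0, focal_mm=1000.0, disparity_mm=20.0)
print(z)  # 500.0 (mm)
```

In this toy setup a larger disparity means the feature lies closer to the sources, which is the depth cue stereographic reading would exploit.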

CoroNet: A Deep Neural Network for Detection and Diagnosis of Covid-19 from Chest X-ray Images

The novel coronavirus disease, Covid-19, originated in Wuhan, China in December 2019 and has since spread across the world. It has so far infected around 1.8 million people and claimed approximately 114,698 lives. As the number of cases rapidly increases, most countries face a shortage of testing kits and resources. The limited supply of testing kits and the growing number of daily cases encouraged us to develop a deep learning model that can aid radiologists and clinicians in detecting Covid-19 cases from chest X-rays. Therefore, in this study, we propose CoroNet, a deep convolutional neural network model that automatically detects Covid-19 infection from chest X-ray images. CoroNet was trained and tested on a dataset prepared by collecting Covid-19 and other chest pneumonia X-ray images from two publicly available databases. The experimental results show that the proposed model achieved an overall accuracy of 89.5%; more importantly, the precision and recall for Covid-19 cases are 97% and 100%, respectively. These preliminary results look promising and can be further improved as more training data become available. Overall, the proposed model substantially advances current radiology-based methodology, and during the Covid-19 pandemic it can be a very helpful tool for clinical practitioners and radiologists in the diagnosis, quantification, and follow-up of Covid-19 cases.
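The abstract reports accuracy, precision, and recall. For readers unfamiliar with how those figures are derived, here is a minimal sketch computing them from hypothetical label lists (not CoroNet's actual predictions):

```python
# Minimal sketch of the reported metrics for a binary Covid-19-vs-other
# task, computed from hypothetical labels -- not the CoroNet test set.

def binary_metrics(y_true, y_pred, positive=1):
    """Return (accuracy, precision, recall) for the `positive` class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Toy example: 1 = Covid-19, 0 = other pneumonia / normal.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0]
acc, prec, rec = binary_metrics(y_true, y_pred)
print(acc, prec, rec)  # 0.875 0.75 1.0
```

A recall of 100% as reported for CoroNet means no Covid-19 case in the test set was missed, while precision measures how many positive calls were correct.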

Quantifying Pulmonary Edema on Chest Radiographs


See article by Horng et al in this issue. William F. Auffermann, MD, PhD, is an associate professor of radiology and imaging sciences at the University of Utah School of Medicine. Dr Auffermann is a cardiothoracic radiologist and is ABPM board certified in clinical informatics. His research interests include imaging informatics, clinical informatics, applications of AI in radiology, medical image perception, and perceptual training. Recent research projects include image annotation for AI using eye tracking, human factors engineering, and developing simulation-based perceptual training methods to facilitate radiology education.

Robust Classification from Noisy Labels: Integrating Additional Knowledge for Chest Radiography Abnormality Assessment

Chest radiography is the most common radiographic examination performed in daily clinical practice for the detection of various heart and lung abnormalities. The large amount of data to be read and reported, often more than 100 studies per day for a single radiologist, poses a challenge to consistently maintaining high interpretation accuracy. The introduction of large-scale public datasets has led to a series of novel systems for automated abnormality classification. However, the labels of these datasets were obtained from medical reports using natural language processing, yielding a large degree of label noise that can impact performance. In this study, we propose novel training strategies that handle label noise from such suboptimal data. Prior label probabilities were measured on a subset of training data re-read by four board-certified radiologists and were used during training to increase the robustness of the model to label noise. Furthermore, we exploit the high comorbidity of abnormalities observed in chest radiography and incorporate this information to further reduce the impact of label noise. Additionally, anatomical knowledge is incorporated by training the system to predict lung and heart segmentations, as well as spatial knowledge labels. To deal with multiple datasets and images derived from various scanners that apply different post-processing techniques, we introduce a novel image normalization strategy. Experiments were performed on an extensive collection of 297,541 chest radiographs from 86,876 patients, leading to state-of-the-art performance for 17 abnormalities from 2 datasets. With an average AUC of 0.880 across all abnormalities, our proposed training strategies significantly improve performance.
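One plausible way to use prior label probabilities against label noise, sketched below, is to soften each noisy NLP-derived label toward a per-abnormality prior estimated from the radiologist re-reads and train against the soft target. This is an illustrative assumption, not necessarily the authors' exact formulation, and the `trust` weight and prior values are hypothetical:

```python
import math

# Sketch: blend a noisy 0/1 label with a prior probability (e.g. estimated
# from radiologist re-reads), then train with cross-entropy on the soft
# target. An assumed formulation, not the paper's exact method.

def soft_target(noisy_label: int, prior: float, trust: float = 0.7) -> float:
    """Soften a noisy binary label toward the prior; `trust` weights the label."""
    return trust * noisy_label + (1.0 - trust) * prior

def bce(pred: float, target: float, eps: float = 1e-7) -> float:
    """Binary cross-entropy against a (possibly soft) target."""
    pred = min(max(pred, eps), 1.0 - eps)
    return -(target * math.log(pred) + (1.0 - target) * math.log(1.0 - pred))

# A noisy positive label for an abnormality whose re-read prior is 0.2
# becomes 0.76 rather than a hard 1.0, so confidently wrong NLP labels
# pull the model less strongly.
t = soft_target(1, prior=0.2)
print(t)  # 0.76
loss = bce(0.9, t)
```

The effect is similar to label smoothing, except the smoothing target is informed by expert re-reads rather than being uniform.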

Quantifying and Leveraging Predictive Uncertainty for Medical Image Assessment


The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast, and more. Most notable is the case of chest radiography, where there is high inter-rater variability in the detection and classification of abnormalities. This is largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views from 2D ultrasound images. Often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy.
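A common way to quantify the predictive uncertainty the title refers to, shown here as an assumed illustration (e.g. Monte Carlo sampling such as MC dropout, not necessarily this paper's method), is to average several stochastic forward passes and report the predictive entropy; high-entropy cases could then be deferred to a radiologist:

```python
import math
import statistics

# Sketch: binary predictive entropy over sampled model outputs. Samples that
# agree give low entropy (confident); samples that disagree give entropy
# near log(2) (maximally uncertain). Probabilities below are made up.

def predictive_entropy(probs):
    """Entropy (in nats) of the mean predicted probability."""
    p = statistics.mean(probs)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

confident = [0.95, 0.97, 0.96, 0.94]   # stochastic passes agree
ambiguous = [0.30, 0.70, 0.55, 0.45]   # stochastic passes disagree
print(predictive_entropy(confident) < predictive_entropy(ambiguous))  # True
```

Thresholding such an uncertainty score is one way to leverage it for assessment: confidently classified studies are auto-reported, while ambiguous ones are flagged for human review.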