RSNA Launches Pulmonary Embolism AI Challenge


The Radiological Society of North America (RSNA) has launched its fourth annual artificial intelligence (AI) challenge, a competition among researchers to create applications that perform a clearly defined clinical task according to specified performance measures. The challenge for competitors this year is to create machine-learning algorithms to detect and characterize instances of pulmonary embolism. RSNA collaborated with the Society of Thoracic Radiology (STR) to create a massive dataset for the challenge. The RSNA-STR Pulmonary Embolism CT (RSPECT) dataset comprises more than 12,000 CT scans collected from five international research centers. The dataset was labeled with detailed clinical annotations by a group of more than 80 expert thoracic radiologists.

Handling of uncertainty in medical data using machine learning and probability theory techniques: A review of 30 years (1991-2020) Artificial Intelligence

Understanding data and reaching valid conclusions are of paramount importance in the present era of big data. Machine learning and probability theory methods have widespread application for this purpose in different fields. One critically important yet less explored aspect is how data and model uncertainties are captured and analyzed. Proper quantification of uncertainty provides valuable information for optimal decision making. This paper reviews related studies conducted in the last 30 years (from 1991 to 2020) on handling uncertainties in medical data using probability theory and machine learning techniques. Medical data are especially prone to uncertainty because of noise, so obtaining clean, noise-free data is essential for accurate diagnosis, and the sources of that noise must be identified before they can be addressed. Because diagnoses and treatment plans are prescribed on the basis of the data the physician obtains, uncertainty in healthcare is growing, yet knowledge of how to address it, and of optimal treatment methods, remains limited given the many sources of uncertainty in medical science. Our findings indicate that several challenges remain in handling uncertainty in raw medical data and in new models. In this work, we summarize the various methods employed to overcome this problem. Recently, the application of novel deep learning techniques to deal with such uncertainties has increased significantly.
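One common scalar summary of a model's predictive uncertainty, of the kind surveyed in reviews like this, is the entropy of the predicted class distribution. The sketch below is a minimal illustration of that idea, not a method from any specific reviewed paper:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a class-probability vector.

    Higher entropy means the model is less certain; a uniform
    distribution over k classes attains the maximum, log(k).
    """
    return -sum(p * math.log(p) for p in probs if p > 0)

# A confident prediction has low entropy; a uniform one is maximally uncertain.
confident = predictive_entropy([0.95, 0.03, 0.02])
uniform = predictive_entropy([1 / 3, 1 / 3, 1 / 3])
```

In a diagnostic setting, cases whose predictions exceed an entropy threshold could be routed to a human reader rather than auto-reported.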

Intelligent Pneumonia Identification from Chest X-Rays: A Systematic Literature Review


Chest radiography is an important diagnostic tool for chest-related diseases. Medical imaging research is currently embracing the automatic detection techniques used in computer vision. Over the past decade, deep learning techniques have shown an enormous breakthrough in the field of medical diagnostics. Various automated systems have been proposed for the rapid detection of pneumonia on chest x-ray images. Although such detection algorithms are many and varied, they have not been summarized into a review that would assist practitioners in selecting the best methods from a real-time perspective, perceiving the available datasets, and understanding the currently achieved results in this domain. After summarizing the topic, the review analyzes the usability, goodness factors, and computational complexities of the algorithms that implement these techniques.

Local Adaptation Improves Accuracy of Deep Learning Model for Automated X-Ray Thoracic Disease Detection : A Thai Study Machine Learning

In modern clinical practice, chest X-ray, or CXR, is one of the most widely used methods of diagnosing abnormal conditions in the chest and nearby structures due to its noninvasive nature, low cost and high availability. Chest radiograph images can reveal abnormalities inside the lung parenchyma, mediastinum, rib cage, and heart, allowing physicians to determine the causes of various illnesses and monitor treatments for a variety of life-threatening diseases, such as pneumonia, tuberculosis, and cancer. In Western societies, CXR is performed on average 236 times per 1,000 patients, which accounts for 25% of all diagnostic imaging procedures [1]. In Thailand and neighboring Southeast Asian countries, thoracic diseases constitute the leading causes of death for all age groups. Among the ten leading causes of death in SEA [2], five are thoracic abnormalities -- tuberculosis, lung cancer, pneumonia, heart disease, and chronic obstructive pulmonary disease (COPD) -- all of which rely on chest X-ray imaging as the primary method of diagnosis. As the region has the highest rate of tuberculosis (TB) according to a World Health Organization (WHO) report [3] and accounts for 44% of TB cases worldwide, CXR has become prevalent in routine examinations and pre-employment screenings region-wide. However, the analysis of chest radiographs requires years of training and experience.

Generative-based Airway and Vessel Morphology Quantification on Chest CT Images Machine Learning

Accurately and precisely characterizing the morphology of small pulmonary structures from Computed Tomography (CT) images, such as airways and vessels, is becoming of great importance for diagnosis of pulmonary diseases. The smaller conducting airways are the major site of increased airflow resistance in chronic obstructive pulmonary disease (COPD), while accurately sizing vessels can help identify arterial and venous changes in lung regions that may determine future disorders. However, traditional methods are often limited due to image resolution and artifacts. We propose a Convolutional Neural Regressor (CNR) that provides cross-sectional measurement of airway lumen, airway wall thickness, and vessel radius. CNR is trained with data created by a generative model of synthetic structures which is used in combination with Simulated and Unsupervised Generative Adversarial Network (SimGAN) to create simulated and refined airways and vessels with known ground-truth. For validation, we first use synthetically generated airways and vessels produced by the proposed generative model to compute the relative error and directly evaluate the accuracy of CNR in comparison with traditional methods. Then, in-vivo validation is performed by analyzing the association between the percentage of the predicted forced expiratory volume in one second (FEV1\%) and the value of the Pi10 parameter, two well-known measures of lung function and airway disease, for airways. For vessels, we assess the correlation between our estimate of the small-vessel blood volume and the lungs' diffusing capacity for carbon monoxide (DLCO). The results demonstrate that Convolutional Neural Networks (CNNs) provide a promising direction for accurately measuring vessels and airways on chest CT images with physiological correlates.
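As a point of contrast with the learned regressor, a traditional cross-sectional measurement such as full width at half maximum (FWHM) can be sketched in a few lines. The function below is a hypothetical illustration of that intensity-based baseline, not the paper's CNR method:

```python
def fwhm_radius(profile, spacing_mm=1.0):
    """Estimate a structure's radius from a 1D cross-sectional intensity
    profile as half the full width at half maximum (FWHM)."""
    peak = max(profile)
    half = peak / 2.0
    # Indices of samples at or above half the peak intensity.
    above = [i for i, v in enumerate(profile) if v >= half]
    # FWHM is the span between the first and last such samples; radius is half.
    return (above[-1] - above[0]) * spacing_mm / 2.0

# A symmetric profile peaking at 10: samples 2..4 are >= 5, so FWHM = 2 mm.
radius = fwhm_radius([0, 2, 6, 10, 6, 2, 0])
```

Estimates like this degrade for structures near the scanner's resolution limit, which is exactly the regime the abstract says motivates a learned alternative.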

Automatic lung segmentation in routine imaging is a data diversity problem, not a methodology problem Machine Learning

Automated segmentation of anatomical structures is a crucial step in many medical image analysis tasks. For lung segmentation, a variety of approaches exist, involving sophisticated pipelines trained and validated on a range of different data sets. However, during translation to clinical routine the applicability of these approaches across diseases remains limited. Here, we show that the accuracy and reliability of lung segmentation algorithms on demanding cases does not primarily depend on methodology, but on the diversity of training data. We compare 4 generic deep learning approaches and 2 published lung segmentation algorithms on routine imaging data with more than 6 different disease patterns and 3 published data sets. We show that a basic approach - U-net - performs either better than, or competitively with, other approaches on both routine data and published data sets, and outperforms published approaches once trained on a diverse data set covering multiple diseases. Training data composition consistently has a bigger impact than algorithm choice on accuracy across test data sets. We carefully analyse the impact of data diversity, and the specifications of annotations on both training and validation sets, to provide a reference for algorithms, training data, and annotation. Results on a seemingly well understood task of lung segmentation suggest the critical importance of training data diversity compared to model choice. The translation of machine-learning approaches developed on specific data sets to the variety of routine clinical data is of increasing importance. As methodology matures across different fields, means to render algorithms robust for the transition from the lab to the clinic become critical.
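Accuracy comparisons of the kind described here are typically reported as Dice overlap between predicted and reference masks. A minimal sketch, with masks represented as sets of voxel indices for brevity (an assumption, not the paper's implementation):

```python
def dice_coefficient(pred, truth):
    """Dice similarity: 2*|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly by convention
    inter = len(pred & truth)
    return 2.0 * inter / (len(pred) + len(truth))

# Two 3-voxel masks sharing 2 voxels overlap with Dice 2*2/6.
score = dice_coefficient({(0, 1), (0, 2), (1, 1)}, {(0, 2), (1, 1), (1, 2)})
```

Because Dice is computed per case, it exposes exactly the failure mode the authors study: a model with high average Dice on its home data set can still collapse on out-of-distribution disease patterns.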

Machine Learning in Quantitative PET Imaging Machine Learning

Recent years have witnessed the trend that machine learning, especially deep learning, is being increasingly used in the application of PET imaging. Various types of machine learning networks have been borrowed from the computer vision field and adapted to specific clinical tasks for PET quantitative imaging. As reviewed in this paper, the most common applications are PET AC and low-count PET reconstruction. It is also an emerging field, since all of the reviewed studies were published within five years. With developments in both machine learning algorithms and computing hardware, more learning-based methods are expected to facilitate the clinical workflow of PET imaging with more potential quantification applications. In addition to PET AC and low-count reconstruction, there are other topics in PET imaging where machine learning can be exploited. For example, high-resolution PET has great potential in visualizing and accurately measuring the radiotracer concentration in structures with dimensions of millimeters, while it is subject to the partial volume effect due to the limited spatial discriminating ability of the scanner.
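The partial volume effect mentioned above can be illustrated with the simplest linear mixing model, in which a voxel's measured value is a weighted average of the structure and its background. This is an illustrative assumption for intuition, not a full PET system model:

```python
def mix(true_value, background, fill_fraction):
    """Measured voxel value under a linear partial-volume mixing model:
    the structure occupies fill_fraction of the voxel, background the rest."""
    return fill_fraction * true_value + (1.0 - fill_fraction) * background

def recover(measured, background, fill_fraction):
    """Invert the mixing model to recover the structure's true value,
    assuming the fill fraction and background are known."""
    return (measured - (1.0 - fill_fraction) * background) / fill_fraction

# A structure filling 40% of a voxel is measured well below its true
# activity of 10.0 against a background of 2.0.
measured = mix(10.0, 2.0, 0.4)
```

In practice the fill fraction is unknown and varies per voxel, which is why millimeter-scale structures systematically underestimate tracer concentration and why learned corrections are of interest.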

Training Deep Learning models with small datasets Machine Learning

Miguel Romero BSc 1, Yannet Interian PhD 1, Timothy Solberg PhD 2, and Gilmer Valdes PhD 2
1 Master of Science in Data Science, University of San Francisco, San Francisco, CA
2 Department of Radiation Oncology, University of California San Francisco, San Francisco, CA
December 17, 2019

Abstract: The growing use of machine learning has produced significant advances in many fields. For image-based tasks, however, the use of deep learning remains challenging in small datasets. In this article, we review, evaluate and compare current state-of-the-art techniques in training neural networks, to elucidate which techniques work best for small datasets. We further propose a path forward for the improvement of model accuracy in medical imaging applications. We observed the best results from one-cycle training, discriminative learning rates with gradual freezing, and parameter modification after transfer learning. We also established that when datasets are small, transfer learning plays an important role beyond parameter initialization by reusing previously learned features. Surprisingly, we observed that there is little advantage in using networks pre-trained on images from another part of the body compared to ImageNet. On the contrary, if images from the same part of the body are available, then transfer learning can produce a significant improvement in performance with as little as 50 images in the training data.

Introduction: The use of machine learning in medical imaging, radiation theranostics and medical physics applications has created tremendous opportunity, with research that encompasses quality assurance [1, 2, 3, 4, 5, 6], outcome prediction [7, 8, 9, 10, 11, 12, 13], segmentation [14, 15, 16, 17] or dosimetric prediction. (Equal-contribution authors. Partially supported by the Wicklow AI and Medical Research Initiative at the Data Institute.)
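The one-cycle training policy highlighted here can be sketched as a two-phase learning-rate schedule: warm up from a low rate to a maximum, then anneal back down. The version below is a simplified cosine variant, with `div` and `pct_warmup` as assumed hyperparameters rather than the authors' exact settings:

```python
import math

def one_cycle_lr(step, total_steps, max_lr, div=25.0, pct_warmup=0.3):
    """Learning rate at a given step under a simplified one-cycle policy.

    Phase 1 (warm-up): cosine ramp from max_lr/div up to max_lr.
    Phase 2 (anneal):  cosine decay from max_lr down toward zero.
    """
    warm = int(total_steps * pct_warmup)
    base = max_lr / div
    if step < warm:
        t = step / max(1, warm)
        return base + (max_lr - base) * 0.5 * (1.0 - math.cos(math.pi * t))
    t = (step - warm) / max(1, total_steps - warm)
    return max_lr * 0.5 * (1.0 + math.cos(math.pi * t))
```

The early low-rate phase is gentle on pre-trained weights, which fits the article's finding that one-cycle training pairs well with transfer learning on small datasets.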

Pi-PE: A Pipeline for Pulmonary Embolism Detection using Sparsely Annotated 3D CT Images Machine Learning

Pulmonary embolisms (PE) are known to be one of the leading causes for cardiac-related mortality. Due to inherent variabilities in how PE manifests and the cumbersome nature of manual diagnosis, there is growing interest in leveraging AI tools for detecting PE. In this paper, we build a two-stage detection pipeline that is accurate, computationally efficient, robust to variations in PE types and kernels used for CT reconstruction, and most importantly, does not require dense annotations. Given the challenges in acquiring expert annotations in large-scale datasets, our approach produces state-of-the-art results with very sparse emboli contours (at 10mm slice spacing), while using models with significantly lower number of parameters. We achieve AUC scores of 0.94 on the validation set and 0.85 on the test set of highly severe PEs. Using a large, real-world dataset characterized by complex PE types and patients from multiple hospitals, we present an elaborate empirical study and provide guidelines for designing highly generalizable pipelines.
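The AUC values reported above have a direct probabilistic reading: the chance that a randomly chosen positive scan receives a higher score than a randomly chosen negative one. A minimal O(n*m) sketch of that rank-based (Mann-Whitney) definition, for illustration only:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a positive outscores a negative, ties counting 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfect separation gives 1.0; indistinguishable scores give 0.5.
perfect = auc([0.9, 0.8], [0.2, 0.1])
```

Under this reading, the reported test AUC of 0.85 means the pipeline ranks a severe-PE study above a negative study about 85% of the time.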

Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists


We developed CheXNeXt, a convolutional neural network that concurrently detects the presence of 14 different pathologies, including pneumonia, pleural effusion, pulmonary masses, and nodules, in frontal-view chest radiographs. CheXNeXt was trained and internally validated on the ChestX-ray8 dataset, with a held-out validation set consisting of 420 images, sampled to contain at least 50 cases of each of the original pathology labels. On this validation set, the majority vote of a panel of 3 board-certified cardiothoracic specialist radiologists served as the reference standard. We compared CheXNeXt's discriminative performance on the validation set to the performance of 9 radiologists using the area under the receiver operating characteristic curve (AUC). The radiologists included 6 board-certified radiologists (average experience 12 years, range 4–28 years) and 3 senior radiology residents, from 3 academic institutions.