AI recognition of patient race in medical imaging: a modelling study


Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient's racial identity from medical images. Using private (Emory CXR, Emory Chest CT, Emory Cervical Spine, and Emory Mammogram) and public (MIMIC-CXR, CheXpert, National Lung Cancer Screening Trial, RSNA Pulmonary Embolism CT, and Digital Hand Atlas) datasets, we evaluated, first, performance quantification of deep learning models in detecting race from medical images, including the ability of these models to generalise to external environments and across multiple imaging modalities. Second, we assessed possible confounding of anatomic and phenotypic population features by assessing the ability of these hypothesised confounders to detect race in isolation using regression models, and by re-evaluating the deep learning models by testing them on datasets stratified by these hypothesised confounding variables. Last, by exploring the effect of image corruptions on model performance, we investigated the underlying mechanism by which AI models can recognise race.
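The stratified re-evaluation described above -- scoring a model separately within bins of a hypothesised confounder -- can be sketched in a few lines. This is a toy illustration with synthetic scores; the function names and data are ours, not the study's.

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    labels = np.asarray(labels, dtype=bool)
    pos, neg = np.asarray(scores)[labels], np.asarray(scores)[~labels]
    # Fraction of (positive, negative) pairs ranked correctly.
    return (pos[:, None] > neg[None, :]).mean()

def stratified_auc(labels, scores, strata):
    """AUC within each stratum of a hypothesised confounder (e.g. BMI bins)."""
    out = {}
    for s in np.unique(strata):
        mask = strata == s
        out[s] = auc(labels[mask], scores[mask])
    return out

# Toy example: scores that separate the classes regardless of stratum,
# i.e. the "confounder" does not explain the model's discrimination.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
scores = labels + rng.normal(0, 0.5, 200)
strata = rng.integers(0, 2, 200)
per_stratum = stratified_auc(labels, scores, strata)
```

If performance stayed high in every stratum, as in this toy setup, the hypothesised confounder alone would not account for the model's ability.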

Natural Language Processing for Smart Healthcare Artificial Intelligence

Smart healthcare has achieved significant progress in recent years. Emerging artificial intelligence (AI) technologies enable a wide range of smart applications across healthcare scenarios. As an essential AI-powered technology, natural language processing (NLP) plays a key role in smart healthcare owing to its capability of analysing and understanding human language. In this work, we review existing studies on NLP for smart healthcare from the perspectives of technique and application. From a technical point of view, we focus on feature extraction and modelling for the various NLP tasks encountered in smart healthcare. On the application side, we concentrate on representative smart healthcare scenarios that employ NLP techniques, including clinical practice, hospital management, personal care, public health, and drug development. We further discuss the limitations of current work and identify directions for future work.
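As a concrete flavour of clinical NLP, consider negation detection in clinical notes (in the spirit of the well-known NegEx algorithm). The sketch below is a deliberately simplified heuristic of our own, not an implementation from the works reviewed: a finding term is flagged as negated when a negation cue appears within a few tokens before it.

```python
CUES = {"no", "not", "denies", "without"}

def negated_terms(text, terms, window=3):
    """NegEx-style heuristic: a term counts as negated when a negation cue
    occurs within `window` tokens before it. A toy sketch, not full NegEx."""
    tokens = text.lower().replace(".", "").replace(",", "").split()
    out = {}
    for term in terms:
        if term not in tokens:
            out[term] = None  # term not mentioned at all
            continue
        i = tokens.index(term)
        # Look back a short window for a negation cue.
        out[term] = any(t in CUES for t in tokens[max(0, i - window):i])
    return out

flags = negated_terms("Patient denies chest pain but reports fever.",
                      ["pain", "fever"])
```

Here "pain" is flagged as negated ("denies" precedes it) while "fever" is not, since the cue falls outside its look-back window; real systems refine this with termination terms and fuller cue lists.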

A Review of Generative Adversarial Networks in Cancer Imaging: New Applications, New Solutions Artificial Intelligence

Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include high inter-observer variability, difficulty of small-sized lesion detection, nodule interpretation and malignancy determination, inter- and intra-tumour heterogeneity, class imbalance, segmentation inaccuracies, and treatment effect uncertainty. The recent advancements in Generative Adversarial Networks (GANs) in computer vision as well as in medical imaging may provide a basis for enhanced capabilities in cancer detection and analysis. In this review, we assess the potential of GANs to address a number of key challenges of cancer imaging, including data scarcity and imbalance, domain and dataset shifts, data access and privacy, data annotation and quantification, as well as cancer detection, tumour profiling and treatment planning. We provide a critical appraisal of the existing literature of GANs applied to cancer imagery, together with suggestions on future research directions to address these challenges. We analyse and discuss 163 papers that apply adversarial training techniques in the context of cancer imaging and elaborate their methodologies, advantages and limitations. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on GANs in the artificial intelligence community.

Unsupervised machine learning application in ACL


De-Sheng Chen,* Tong-Fu Wang,* Jia-Wang Zhu, Bo Zhu, Zeng-Liang Wang, Jian-Gang Cao, Cai-Hong Feng, Jun-Wei Zhao
Department of Sports Medicine and Arthroscopy, Tianjin Hospital of Tianjin University, Tianjin, People's Republic of China
*These authors contributed equally to this work
Correspondence: Jia-Wang Zhu, Department of Sports Medicine and Arthroscopy, Tianjin Hospital of Tianjin University, Tianjin, People's Republic of China. Email [email protected]

Purpose: We aim to present an unsupervised machine learning application to anterior cruciate ligament (ACL) rupture and to evaluate whether supervised machine learning-derived radiomics features enable accurate prediction of ACL rupture.
Patients and Methods: Sixty-eight patients were reviewed. Their demographic features were recorded, radiomics features were extracted, and the input dataset was defined as the combination of demographic and radiomics features. The input dataset was automatically classified by the unsupervised machine learning algorithm. We then used a supervised machine learning algorithm to construct a radiomics model.
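The pipeline described -- unsupervised grouping of the combined demographic and radiomics features, followed by a supervised model -- can be illustrated with a minimal sketch. The k-means routine and the synthetic "radiomics" features below are our own simplifications for illustration, not the authors' algorithm or data.

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    """Minimal k-means: group feature vectors without using any labels."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest centroid.
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        # Recompute centroids, keeping the old one if a cluster empties.
        centroids = np.array([X[labels == j].mean(0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels, centroids

# Toy feature matrix: two hypothetical populations (e.g. intact vs ruptured ACL),
# each row a vector of demographic + radiomics features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(3, 1, (40, 5))])
y = np.array([0] * 40 + [1] * 40)

clusters, centroids = kmeans(X, k=2)
# Agreement between unsupervised clusters and the true grouping (up to relabelling).
agree = max((clusters == y).mean(), (clusters != y).mean())
```

A supervised stage would then fit a classifier (e.g. nearest-centroid or logistic regression) on the same features with the recorded outcomes as labels.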

A Comprehensive Review of Computer-aided Whole-slide Image Analysis: from Datasets to Feature Extraction, Segmentation, Classification, and Detection Approaches Artificial Intelligence

With the development of computer-aided diagnosis (CAD) and image scanning technology, Whole-slide Image (WSI) scanners are widely used in the field of pathological diagnosis, and WSI analysis has become key to modern digital pathology. Since 2004, WSI has been used increasingly in CAD. Because machine vision methods are usually semi-automatic or fully automatic, they are highly efficient and labor-saving. The combination of WSI and CAD technologies for segmentation, classification, and detection helps histopathologists obtain more stable and quantitative analysis results, save labor costs, and improve diagnostic objectivity. This paper reviews methods of WSI analysis based on machine learning. First, the development status of WSI and CAD methods is introduced. Second, we discuss publicly available WSI datasets and evaluation metrics for segmentation, classification, and detection tasks. The latest developments in machine learning for WSI segmentation, classification, and detection are then reviewed in turn. Finally, the existing methods are examined, their applicability is analyzed, and the application prospects of these analysis methods in the field are forecast.
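Because a WSI is far too large to process whole, most machine-learning pipelines first tile it into patches and discard background. Below is a minimal sketch of that common preprocessing step, under the assumption of a bright background and dark tissue (typical of H&E scans); the thresholds and function name are illustrative, not taken from any specific method in the review.

```python
import numpy as np

def extract_patches(slide, patch=256, stride=256, tissue_thresh=0.05):
    """Tile a whole-slide-style grayscale array (H x W, values in [0, 1])
    and keep only patches containing enough dark (tissue) pixels."""
    patches, coords = [], []
    H, W = slide.shape
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            tile = slide[y:y + patch, x:x + patch]
            # Keep the tile if more than `tissue_thresh` of it is dark.
            if (tile < 0.8).mean() > tissue_thresh:
                patches.append(tile)
                coords.append((y, x))
    return patches, coords

# Toy slide: mostly white background with one dark tissue blob.
slide = np.ones((1024, 1024))
slide[300:600, 300:600] = 0.2
patches, coords = extract_patches(slide)
```

The retained patches (here, the four tiles overlapping the blob) are what a downstream segmentation or classification network actually sees.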

Chest X-Rays with Artificial Intelligence Catches More Lung Cancer


Lung cancer detection and radiologist performance can get a boost from an artificial intelligence (AI) algorithm that pinpoints previously undetected cancers on chest X-rays. In a study published in the Dec. 10 Radiology: Cardiothoracic Imaging, investigators from Seoul National University Hospital outlined how a commercially available deep-learning algorithm outperformed four thoracic radiologists on both first and second reads. Overall, said the team led by Ju Gang Nam, M.D., the algorithm offered both higher sensitivity and higher specificity, and it improved providers' performance as a second reader, leading to significantly improved detection rates. To date, however, the team said, adoption of computer-aided detection with chest X-ray has been slow because many providers still have lingering questions about whether it can perform well enough in clinical practice. To answer that question, Nam's team used an enriched dataset of 50 normal chest X-rays and 168 posteroanterior chest X-rays with lung cancers.

Local Adaptation Improves Accuracy of Deep Learning Model for Automated X-Ray Thoracic Disease Detection : A Thai Study Machine Learning

In modern clinical practice, the chest X-ray, or CXR, is one of the most widely used methods for diagnosing abnormal conditions in the chest and nearby structures, owing to its noninvasive nature, low cost, and high availability. Chest radiographs can reveal abnormalities in the lung parenchyma, mediastinum, rib cage, and heart, allowing physicians to determine the causes of various illnesses and monitor treatment of a variety of life-threatening diseases, such as pneumonia, tuberculosis, and cancer. In Western societies, CXR is performed 236 times per 1,000 patients on average, accounting for 25% of all diagnostic imaging procedures [1]. In Thailand and neighboring Southeast Asian countries, thoracic diseases are the leading causes of death for all age groups. Among the ten leading causes of death in SEA [2], five are thoracic abnormalities -- tuberculosis, lung cancer, pneumonia, heart disease, and chronic obstructive pulmonary disease (COPD) -- all of which rely on chest X-ray imaging as the primary method of diagnosis. As the region has the highest rate of tuberculosis (TB) according to a World Health Organization (WHO) report [3], accounting for 44% of TB cases worldwide, CXR has become prevalent in routine examinations and pre-employment screenings region-wide. However, the analysis of chest radiographs requires years of training and experience.

Volumetric Lung Nodule Segmentation using Adaptive ROI with Multi-View Residual Learning Machine Learning

Accurate quantification of pulmonary nodules can greatly assist the early diagnosis of lung cancer, enhancing patients' survival possibilities. A number of nodule segmentation techniques have been proposed; however, all existing techniques either rely on a radiologist-supplied 3-D volume of interest (VOI) or use a constant region of interest (ROI), and only investigate the presence of nodule voxels within the given VOI. Such approaches prevent the solution from investigating nodule presence outside the given VOI and also include redundant structures in the VOI, which may lead to inaccurate nodule segmentation. In this work, a novel semi-automated approach for 3-D nodule segmentation in volumetric computed tomography (CT) lung scans is proposed. The technique comprises two stages. In the first stage, it takes a 2-D ROI containing the nodule as input and performs a patch-wise investigation along the axial axis with a novel adaptive ROI strategy. The adaptive ROI algorithm enables the solution to dynamically select the ROI for the surrounding slices to investigate the presence of the nodule using a deep residual U-Net architecture. This first stage provides an initial estimate of the nodule, which is then used to extract the VOI. In the second stage, the extracted VOI is further investigated along the coronal and sagittal axes with two different networks, and finally all estimated masks are fed into a consensus module to produce the final volumetric segmentation of the nodule. The proposed approach has been rigorously evaluated on the LIDC dataset, the largest publicly available dataset, and the results suggest that it is significantly more robust and accurate than previous state-of-the-art techniques.
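The consensus module described above can be illustrated as a per-voxel majority vote over the three view-wise masks. This is a minimal sketch of one plausible consensus rule; the paper's actual module may combine the masks differently.

```python
import numpy as np

def consensus(masks, votes_needed=2):
    """Majority vote across per-axis binary masks (axial, coronal, sagittal):
    a voxel is nodule if at least `votes_needed` of the masks agree."""
    return (np.sum(masks, axis=0) >= votes_needed).astype(np.uint8)

# Toy 4x4x4 volume: three slightly disagreeing view-wise segmentations.
axial = np.zeros((4, 4, 4), np.uint8);    axial[1:3, 1:3, 1:3] = 1
coronal = np.zeros((4, 4, 4), np.uint8);  coronal[1:3, 1:3, 1:4] = 1
sagittal = np.zeros((4, 4, 4), np.uint8); sagittal[0:3, 1:3, 1:3] = 1

final = consensus(np.stack([axial, coronal, sagittal]))
```

Voxels supported by only one view (e.g. the extra slab in the sagittal mask) are dropped, while the core on which at least two views agree survives.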

Deep learning helps radiologists detect lung cancer on chest X-rays – Physics World


Chest radiography is the most common imaging exam used for lung cancer screening. However, the size, density and location of lung lesions make their detection on chest X-rays challenging. Recently, machine-learning methods have been developed to help improve diagnostic accuracy, with deep convolutional neural networks (DCNNs) showing promise for chest radiograph interpretation. A study from four medical centres on three continents has now demonstrated that DCNN software can improve radiologists' detection of malignant lung cancers on chest X-rays (Radiology 10.1148/radiol.2019182465). "The average sensitivity of radiologists was improved by 5.2% when they re-reviewed X-rays with the deep-learning software," says Byoung Wook Choi from Yonsei University College of Medicine in Seoul, Korea.

Deep Convolutional Neural Network–based Software Improves Radiologist Detection of Malignant Lung Nodules on Chest Radiographs


Multicenter studies are required to validate the added benefit of using deep convolutional neural network (DCNN) software for detecting malignant pulmonary nodules on chest radiographs. To compare the performance of radiologists in detecting malignant pulmonary nodules on chest radiographs when assisted by deep learning–based DCNN software with that of radiologists or DCNN software alone in a multicenter setting. Investigators at four medical centers retrospectively identified 600 lung cancer–containing chest radiographs and 200 normal chest radiographs. Each radiograph with a lung cancer had at least one malignant nodule confirmed by CT and pathologic examination. Twelve radiologists from the four centers independently analyzed the chest radiographs and marked regions of interest. Commercially available deep learning–based computer-aided detection software separately trained, tested, and validated with 19 330 radiographs was used to find suspicious nodules. The radiologists then reviewed the images with the assistance of DCNN software.