
The Bone & Joint Journal


The diagnosis of developmental dysplasia of the hip (DDH) is challenging owing to extensive variation in paediatric pelvic anatomy. Artificial intelligence (AI) may represent an effective diagnostic tool for DDH. Here, we aimed to develop a deep learning system for diagnosing DDH from anteroposterior pelvic radiographs in children and to analyze the feasibility of its application. In total, 10,219 anteroposterior pelvic radiographs were retrospectively collected from April 2014 to December 2018. Radiographs were grouped according to age and into 'dislocation' (dislocation and subluxation) and 'non-dislocation' (normal cases and those with dysplasia of the acetabulum) groups based on clinical diagnosis.
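
The binary grouping described above can be sketched as a simple label-mapping step. The diagnosis strings here are hypothetical placeholders, not the study's actual data dictionary:

```python
# Hypothetical diagnosis strings; the study groups clinical diagnoses
# into a binary 'dislocation' vs 'non-dislocation' label as described above.
DISLOCATION = {"dislocation", "subluxation"}
NON_DISLOCATION = {"normal", "acetabular dysplasia"}

def binary_label(diagnosis):
    if diagnosis in DISLOCATION:
        return 1  # 'dislocation' group
    if diagnosis in NON_DISLOCATION:
        return 0  # 'non-dislocation' group
    raise ValueError(f"unrecognised diagnosis: {diagnosis!r}")
```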

Robust Classification from Noisy Labels: Integrating Additional Knowledge for Chest Radiography Abnormality Assessment

Chest radiography is the most common radiographic examination performed in daily clinical practice for the detection of various heart and lung abnormalities. The large amount of data to be read and reported, with more than 100 studies per day for a single radiologist, poses a challenge in consistently maintaining high interpretation accuracy. The introduction of large-scale public datasets has led to a series of novel systems for automated abnormality classification. However, the labels of these datasets were obtained from natural-language-processed medical reports, yielding a large degree of label noise that can impact performance. In this study, we propose novel training strategies that handle label noise from such suboptimal data. Prior label probabilities were measured on a subset of training data re-read by four board-certified radiologists and were used during training to increase the robustness of the trained model to label noise. Furthermore, we exploit the high comorbidity of abnormalities observed in chest radiography and incorporate this information to further reduce the impact of label noise. Additionally, anatomical knowledge is incorporated by training the system to predict lung and heart segmentation, as well as spatial knowledge labels. To deal with multiple datasets and images derived from various scanners that apply different post-processing techniques, we introduce a novel image normalization strategy. Experiments were performed on an extensive collection of 297,541 chest radiographs from 86,876 patients, leading to a state-of-the-art performance level for 17 abnormalities from 2 datasets. With an average AUC score of 0.880 across all abnormalities, our proposed training strategies yield a significant improvement in performance.
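
One common way to use measured prior label-noise rates during training is forward noise correction: the model's clean-label probability is mapped to the probability of *observing* a (possibly noisy) positive label before computing cross-entropy. The abstract does not specify the paper's exact formulation, so this is a generic sketch with hypothetical per-class noise rates:

```python
import numpy as np

def noise_corrected_bce(p_clean, y_observed, fpr, fnr):
    # Forward noise correction: probability of *observing* a positive label,
    # given the model's clean-label probability and the estimated noise rates
    # (false-positive rate fpr, false-negative rate fnr from the re-read subset).
    p_obs = p_clean * (1.0 - fnr) + (1.0 - p_clean) * fpr
    p_obs = np.clip(p_obs, 1e-7, 1.0 - 1e-7)
    # Binary cross-entropy against the (noisy) observed label.
    return float(-(y_observed * np.log(p_obs)
                   + (1 - y_observed) * np.log(1.0 - p_obs)))
```

With zero noise rates this reduces to standard binary cross-entropy; with a nonzero false-negative rate, confident positive predictions are penalized less when the observed label is negative.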

Automatic Detection of Inadequate Pediatric Lateral Neck Radiographs of the Airway and Soft Tissues using Deep Learning


To develop and validate a deep learning (DL) algorithm to identify poor-quality lateral airway radiographs. A total of 1200 lateral airway radiographs obtained in emergency department patients between January 1, 2000, and July 1, 2019, were retrospectively queried from the picture archiving and communication system. Two radiologists classified each radiograph as adequate or inadequate. Disagreements were adjudicated by a third radiologist. The radiographs were used to train and test the DL classifiers.
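
The two-reader labelling protocol with third-reader adjudication can be sketched as follows. The label strings are hypothetical; only the agree-or-adjudicate logic comes from the abstract:

```python
# Minimal sketch of the labelling protocol: two radiologists classify each
# radiograph; a third adjudicates only where the first two disagree.
def adjudicate(reader_a, reader_b, adjudicator):
    """Keep the label when the two readers agree; otherwise defer
    to the third (adjudicating) radiologist."""
    return [a if a == b else c
            for a, b, c in zip(reader_a, reader_b, adjudicator)]
```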

X2Teeth: 3D Teeth Reconstruction from a Single Panoramic Radiograph

3D teeth reconstruction from X-ray is important for dental diagnosis and many clinical operations. However, no existing work has explored reconstruction of the teeth of a whole cavity from a single panoramic radiograph. Unlike single-object reconstruction from photographs, this task poses the unique challenge of reconstructing multiple objects at high resolution. To address it, we develop a novel ConvNet, X2Teeth, that decomposes the task into teeth localization and single-tooth shape estimation. We also introduce a patch-based training strategy so that X2Teeth can be trained end-to-end for optimal performance. Extensive experiments show that our method can successfully estimate the 3D structure of the cavity and capture the details of each tooth. Moreover, X2Teeth achieves a reconstruction IoU of 0.681, significantly outperforming the encoder-decoder method by 1.71× and the retrieval-based method by 1.52×. Our method is also promising for other multi-anatomy 3D reconstruction tasks.
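
The reconstruction IoU reported above is, for volumetric shapes, conventionally computed between binary voxel grids. A minimal sketch (the paper's exact evaluation code is not given here):

```python
import numpy as np

def voxel_iou(pred, gt):
    """Intersection-over-union between two binary voxel grids."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both grids empty: treat as perfect agreement
    return float(np.logical_and(pred, gt).sum() / union)
```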

Joint Modeling of Chest Radiographs and Radiology Reports for Pulmonary Edema Assessment


We propose and demonstrate a novel machine learning algorithm that assesses pulmonary edema severity from chest radiographs. While large publicly available datasets of chest radiographs and free-text radiology reports exist, only limited numerical edema severity labels can be extracted from radiology reports. This poses a significant challenge for learning image classification models. To take advantage of the rich information present in the radiology reports, we develop a neural network model that is trained on both images and free text to assess pulmonary edema severity from chest radiographs at inference time. Our experimental results suggest that the joint image-text representation learning improves the performance of pulmonary edema assessment compared to a supervised model trained on images only.
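
Joint image-text representation learning of this kind is often trained with a ranking objective that pulls each image embedding toward its paired report embedding and pushes it away from mismatched reports. The abstract does not state the paper's loss, so the following is a generic hinge-ranking sketch over precomputed embeddings:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ranking_loss(img_emb, txt_emb, margin=0.2):
    """Hinge ranking loss over a batch: a matched image-report pair should
    score higher than any mismatched pair by at least `margin`."""
    n = img_emb.shape[0]
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            loss += max(0.0, margin
                        - cosine(img_emb[i], txt_emb[i])   # matched pair
                        + cosine(img_emb[i], txt_emb[j]))  # mismatched pair
            pairs += 1
    return loss / pairs
```

When every image embedding coincides with its paired report embedding and is orthogonal to all others, the loss is zero for any margin below 1.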