Multimodal Learning for Cardiovascular Risk Prediction using EHR Data

arXiv.org Machine Learning

Electronic health records (EHRs) contain structured and unstructured data of significant clinical and research value. Various machine learning approaches have been developed to employ information in EHRs for risk prediction. Most of these attempts, however, focus on structured EHR fields and lose the vast amount of information in the unstructured texts. To exploit the potential information captured in EHRs, in this study we propose a multimodal recurrent neural network model for cardiovascular risk prediction that integrates both medical texts and structured clinical information. The proposed multimodal bidirectional long short-term memory (BiLSTM) model concatenates word embeddings with classical clinical predictors before passing them to a final fully connected neural network. In the experiments, we compare the performance of different deep neural network (DNN) architectures, including convolutional neural networks and long short-term memory networks, in scenarios using clinical variables and chest X-ray radiology reports. Evaluated on a data set of real-world patients with manifest vascular disease or at high risk for cardiovascular disease, the proposed BiLSTM model demonstrates state-of-the-art performance and outperforms other DNN baseline architectures.
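A minimal PyTorch sketch of the fusion idea described in the abstract: a BiLSTM encodes word embeddings from the report text, the resulting text features are concatenated with structured clinical predictors, and a final fully connected network produces the prediction. All dimensions, layer choices, and the toy inputs are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: multimodal BiLSTM fusing report text with clinical predictors.
import torch
import torch.nn as nn


class MultimodalBiLSTM(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=64,
                 n_clinical=20, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Fully connected head over [text features || clinical predictors].
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim + n_clinical, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, report_tokens, clinical_vars):
        # report_tokens: (batch, seq_len) token ids from the radiology report
        # clinical_vars: (batch, n_clinical) structured clinical predictors
        embedded = self.embedding(report_tokens)
        _, (h_n, _) = self.bilstm(embedded)
        # Concatenate last forward and backward hidden states as text features.
        text_features = torch.cat([h_n[0], h_n[1]], dim=1)
        fused = torch.cat([text_features, clinical_vars], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = MultimodalBiLSTM()
    tokens = torch.randint(1, 5000, (4, 50))   # toy batch of tokenized reports
    clinical = torch.randn(4, 20)              # toy structured predictors
    print(model(tokens, clinical).shape)       # torch.Size([4, 2])
```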


Joint Modeling of Chest Radiographs and Radiology Reports for Pulmonary Edema Assessment

#artificialintelligence

We propose and demonstrate a novel machine learning algorithm that assesses pulmonary edema severity from chest radiographs. While large publicly available datasets of chest radiographs and free-text radiology reports exist, only limited numerical edema severity labels can be extracted from radiology reports. This is a significant challenge in learning such models for image classification. To take advantage of the rich information present in the radiology reports, we develop a neural network model that is trained on both images and free text to assess pulmonary edema severity from chest radiographs at inference time. Our experimental results suggest that the joint image-text representation learning improves the performance of pulmonary edema assessment compared to a supervised model trained on images only.
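For illustration, a hedged PyTorch sketch of joint image-text representation learning in this spirit: an image encoder and a text encoder are trained together (here with a simple cosine-alignment term plus a supervised severity loss), while only the image branch is needed at inference time. The encoders, loss terms, dimensions, and severity levels are assumptions, not the paper's architecture.

```python
# Sketch only: joint image-text training, image-only severity prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointImageTextModel(nn.Module):
    def __init__(self, embed_dim=128, n_severity_levels=4, vocab_size=5000):
        super().__init__()
        # Toy CNN image encoder; a real model would use a deeper backbone.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Bag-of-embeddings text encoder for the radiology report.
        self.text_embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.severity_head = nn.Linear(embed_dim, n_severity_levels)

    def forward(self, image, report_tokens):
        img_z = F.normalize(self.image_encoder(image), dim=1)
        txt_z = F.normalize(self.text_embedding(report_tokens), dim=1)
        logits = self.severity_head(img_z)     # image-only prediction path
        return img_z, txt_z, logits


def joint_loss(img_z, txt_z, logits, labels):
    # Align paired image/text embeddings and supervise severity where labeled.
    align = 1.0 - F.cosine_similarity(img_z, txt_z).mean()
    ce = F.cross_entropy(logits, labels)
    return align + ce


if __name__ == "__main__":
    model = JointImageTextModel()
    imgs = torch.randn(4, 1, 64, 64)           # toy radiographs
    reports = torch.randint(1, 5000, (4, 30))  # toy tokenized reports
    labels = torch.randint(0, 4, (4,))         # toy severity labels
    img_z, txt_z, logits = model(imgs, reports)
    print(joint_loss(img_z, txt_z, logits, labels).item())
```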


RSNA Launches Pulmonary Embolism AI Challenge

#artificialintelligence

The Radiological Society of North America (RSNA) has launched its fourth annual artificial intelligence (AI) challenge, a competition among researchers to create applications that perform a clearly defined clinical task according to specified performance measures. The challenge for competitors this year is to create machine-learning algorithms to detect and characterize instances of pulmonary embolism. RSNA collaborated with the Society of Thoracic Radiology (STR) to create a massive dataset for the challenge. The RSNA-STR Pulmonary Embolism CT (RSPECT) dataset comprises more than 12,000 CT scans collected from five international research centers. The dataset was labeled with detailed clinical annotations by a group of more than 80 expert thoracic radiologists.


Radiology Initiatives Illustrate Uses for Open Data and Open AI Research

#artificialintelligence

Fans of data in health care often speculate about what clinicians and researchers could achieve by reducing friction in data sharing. What if we had easy access to group repositories, expert annotations and labels, robust and consistent metadata, and standards without inconsistencies? Since 2017, the Radiological Society of North America (RSNA) has been demonstrating a model for such data sharing. That year marked RSNA's first AI challenge. RSNA has worked since then to make the AI challenge an increasingly international collaboration.


A Strong Baseline for Domain Adaptation and Generalization in Medical Imaging

arXiv.org Artificial Intelligence

This work provides a strong baseline for the problem of multi-source multi-target domain adaptation and generalization in medical imaging. Using a diverse collection of ten chest X-ray datasets, we empirically demonstrate the benefits of training medical imaging deep learning models on varied patient populations for generalization to out-of-sample domains.
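A minimal sketch of the multi-source training setup, under stated assumptions: several source-domain datasets (synthetic stand-ins here, not the ten real chest X-ray collections) are pooled into a single training set, and one classifier is then evaluated on a held-out target domain. Dataset construction, model, and hyperparameters are illustrative only.

```python
# Sketch only: pool multiple source domains, evaluate on an unseen domain.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader


def make_toy_domain(n=64):
    # Stand-in for one chest X-ray dataset: (image, label) pairs.
    return TensorDataset(torch.randn(n, 1, 32, 32), torch.randint(0, 2, (n,)))


source_domains = [make_toy_domain() for _ in range(3)]   # e.g. a few of the sources
target_domain = make_toy_domain()                        # held-out domain

train_loader = DataLoader(ConcatDataset(source_domains), batch_size=16, shuffle=True)
test_loader = DataLoader(target_domain, batch_size=16)

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(2):                                   # brief toy training loop
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()

# Out-of-sample evaluation on the unseen target domain.
with torch.no_grad():
    correct = sum((model(x).argmax(1) == y).sum().item() for x, y in test_loader)
print(f"target-domain accuracy: {correct / len(target_domain):.2f}")
```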