Collaborating Authors

Insights from Predicting Pediatric Asthma Exacerbations from Retrospective Clinical Data

AAAI Conferences

The paper presents ongoing issues and challenges we face in applying machine learning methods to retrospectively collected clinical data. The objective of our research is to build a reliable prediction model for early assessment of emergency pediatric asthma exacerbations. This predictive model should distinguish between patients with mild and moderate/severe asthma attacks at a medically acceptable level of performance. Our real-life data set presents several difficult challenges, which we describe in this paper. Our approach to overcoming some of these difficulties is to use external expert knowledge to aid classification by decomposing the classification problem into a two-tier concept structure, in which concepts can be explicitly described in terms of the external knowledge source. This approach also has the advantage of significantly reducing the size of the required training set.

Visualization of Emergency Department Clinical Data for Interpretable Patient Phenotyping

Nathan C. Hurley (a), Adrian D. Haimovich (b), R. Andrew Taylor (b), Bobak J. Mortazavi (a); (a) Department of Computer Science and Engineering, Texas A&M University, United States; (b) Department of Emergency Medicine, Yale School of Medicine, United States

Visual summarization of clinical data collected on patients contained within the electronic health record (EHR) may enable precise and rapid triage at the time of patient presentation to an emergency department (ED). The triage process is critical in the appropriate allocation of resources and in anticipating eventual patient disposition, typically admission to the hospital or discharge home. EHR data are high-dimensional and complex, but offer the opportunity to discover and characterize underlying data-driven patient phenotypes. Data-driven phenotypes are intended to relieve reliance on weak labels like diagnosis codes and to aid in identifying populations of existing patients that are most similar to a specific patient. These phenotypes will enable improved, personalized therapeutic decision making and prognostication. In this work, we focus on the challenge of two-dimensional patient projections. A low-dimensional embedding offers visual interpretability lost in higher dimensions. While linear dimensionality reduction techniques such as principal component analysis are often used towards this aim, they are insufficient to describe the variance of patient data, since a linear reduction does not account for higher-order, nonlinear interactions of variables. In this work, we employ the newly described nonlinear embedding technique called uniform manifold approximation and projection (UMAP). UMAP seeks to capture both local and global structures in high-dimensional data.
We then use Gaussian mixture models to identify clusters in the embedded data and use the adjusted Rand index (ARI) to establish stability in the discovery of these clusters. This technique is applied to five common clinical chief complaints from a real-world ED EHR dataset, and we describe the emergent properties of the discovered clusters. We observe clinically relevant cluster attributes, suggesting that visual embedding of EHR data using nonlinear dimensionality reduction is a promising approach to revealing data-driven patient phenotypes. Across the five chief complaints, we find between 2 and 6 clusters, with the peak mean pairwise ARI between subsequent training iterations ranging from 0.35 to 0.74.
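The pipeline the abstract describes (2-D embedding, Gaussian mixture clustering, then mean pairwise ARI across repeated fits as a stability score) can be sketched as below. This is a minimal illustration on synthetic data, not the authors' code: PCA stands in for the embedding step so the sketch runs with scikit-learn alone, but the paper's nonlinear step would use `umap.UMAP(n_components=2)` from the umap-learn package instead.

```python
import numpy as np
from sklearn.decomposition import PCA  # stand-in; the paper uses UMAP (umap-learn)
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# Toy stand-in for an EHR feature matrix: 300 "patients" x 10 features,
# drawn from three well-separated groups.
X = np.vstack([rng.normal(loc=m, size=(100, 10)) for m in (0.0, 3.0, 6.0)])

# Step 1: two-dimensional embedding (swap in umap.UMAP(n_components=2)
# for the nonlinear embedding described in the paper).
emb = PCA(n_components=2).fit_transform(X)

# Step 2: Gaussian mixture clustering in the embedded space.
def cluster(embedding, k, seed):
    return GaussianMixture(n_components=k, random_state=seed).fit_predict(embedding)

# Step 3: stability as the mean pairwise ARI over repeated fits
# with different random initializations.
def mean_pairwise_ari(embedding, k, n_runs=5):
    labelings = [cluster(embedding, k, seed) for seed in range(n_runs)]
    scores = [adjusted_rand_score(a, b)
              for i, a in enumerate(labelings) for b in labelings[i + 1:]]
    return float(np.mean(scores))

stability = mean_pairwise_ari(emb, k=3)
```

A stability score near 1.0 means repeated fits recover essentially the same partition; the 0.35 to 0.74 range reported in the abstract indicates moderately stable clusters in the real EHR data.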

AFGuide System to Support Personalized Management of Atrial Fibrillation

AAAI Conferences

Atrial fibrillation (AF), the most common arrhythmia with clinical significance, is a serious public health problem. Yet a number of studies show that current AF management is suboptimal due to a knowledge gap between primary care physicians and evidence-based treatment recommendations. This gap is caused by a number of barriers, such as a lack of knowledge about new therapies, challenges associated with multi-morbidity, or a lack of patient engagement in therapy planning. The decision support tools proposed to address these barriers each handle individual barriers, but none tackles them comprehensively. Responding to this challenge, we propose AFGuide -- a clinical decision support system to educate and support primary care physicians in developing evidence-based and optimal AF therapies that take into account multi-morbid conditions and patient preferences. AFGuide relies on artificial intelligence techniques (logical reasoning) and preference modeling techniques, and combines them with mobile computing technologies. In this paper we present the design of the system and discuss its proposed implementation and evaluation.

Improving Emergency Department ESI Acuity Assignment Using Machine Learning and Clinical Natural Language Processing

Effective triage is critical to mitigating the effect of increased volume by accurately determining patient acuity and need for resources, and by establishing effective acuity-based patient prioritization. The purpose of this retrospective study was to determine whether historical EHR data can be extracted and synthesized with clinical natural language processing (C-NLP) and the latest ML algorithms (KATE) to produce highly accurate ESI predictive models. An ML model (KATE) for the triage process was developed using 166,175 patient encounters from two participating hospitals. The model was then tested against a gold set derived from a random sample of triage encounters at the study sites, for which correct acuity assignments were recorded by study clinicians using the Emergency Severity Index (ESI) standard as a guide. At the two study sites, KATE predicted accurate ESI acuity assignments 75.9% of the time, compared to nurses (59.8%) and average individual study clinicians (75.3%). KATE's accuracy was 26.9% higher than the average nurse accuracy (p-value < 0.0001). On the boundary between ESI 2 and ESI 3 acuity assignments, which relates to the risk of decompensation, KATE's accuracy was 93.2% higher (80.0%, versus 41.4% for triage nurses; p-value < 0.0001). KATE provides a triage acuity assignment substantially more accurate than the triage nurses in this study sample. KATE operates independently of contextual factors, unaffected by the external pressures that can cause under-triage, and may mitigate the racial and social biases that can negatively affect the accuracy of triage assignment. Future research should focus on the impact of KATE providing real-time feedback to triage nurses, and on KATE's impact on mortality and morbidity, ED throughput, resource optimization, and nursing outcomes.
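KATE's internals are not described in the abstract, so the sketch below is only a generic, hypothetical illustration of the kind of pipeline involved: extracting features from free-text triage notes and predicting an ESI acuity level. The notes and labels are invented for illustration, and TF-IDF with logistic regression stands in for whatever feature extraction and model KATE actually uses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical triage notes with invented ESI labels (1 = most acute, 5 = least).
notes = [
    "chest pain radiating to left arm, diaphoretic",
    "ankle sprain after fall, ambulatory, mild swelling",
    "shortness of breath, wheezing, history of asthma",
    "medication refill request, no acute complaint",
    "severe abdominal pain, vomiting blood",
    "sore throat for two days, no fever",
]
esi = [2, 4, 2, 5, 1, 4]

# Bag-of-ngrams features from the note text, then a multiclass classifier
# over the ESI levels; a production system would use far richer C-NLP features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(notes, esi)

pred = int(model.predict(["crushing chest pain and sweating"])[0])
```

With such a model, accuracy against a clinician-labeled gold set (as in the study) is just the fraction of encounters where the predicted ESI level matches the reference assignment.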

A comparative study of artificial intelligence and human doctors for the purpose of triage and diagnosis

Online symptom checkers have significant potential to improve patient care; however, their reliability and accuracy remain variable. We hypothesised that an artificial intelligence (AI) powered triage and diagnostic system would compare favourably with human doctors with respect to triage and diagnostic accuracy. We performed a prospective validation study of the accuracy and safety of an AI-powered triage and diagnostic system. Identical cases were evaluated by both the AI system and human doctors. Differential diagnoses and triage outcomes were evaluated by an independent judge, who was blinded to the source (AI system or human doctor) of the outcomes. Independently of these cases, vignettes from publicly available resources were also assessed to provide a benchmark against previous studies and the diagnostic component of the MRCGP exam. Overall, we found that the Babylon AI-powered Triage and Diagnostic System was able to identify the condition modelled by a clinical vignette with accuracy comparable to human doctors (in terms of precision and recall). In addition, we found that the triage advice recommended by the AI system was, on average, safer than that of human doctors, when compared to the ranges of acceptable triage provided by independent expert judges, with only a minimal reduction in appropriateness.