Rich countries pour heart-stopping amounts of money into health care. Advanced economies typically spend about 10% of GDP on keeping their citizens in good nick, a share that is rising as populations age. The American system's heft and inertia, perpetuated by the drugmakers, pharmacies, insurers, hospitals and others that benefit from it, have long protected it from disruption. Its size and stodginess also explain why it is being covetously eyed by big tech. Few other industries offer a potential market large enough to move the needle for the trillion-dollar technology titans.
Background: Acute respiratory distress syndrome (ARDS) is a condition that is often considered to have broad and subjective diagnostic criteria and is associated with significant mortality and morbidity. Early and accurate prediction of ARDS and related conditions such as hypoxemia and sepsis could allow timely administration of therapies, leading to improved patient outcomes. Objective: The aim of this study is to explore how multilabel classification in the clinical setting can take advantage of the underlying dependencies between ARDS and related conditions to improve early prediction of ARDS in patients. Methods: The electronic health record data set included 40,703 patient encounters from 7 hospitals from April 20, 2018, to March 17, 2021. A recurrent neural network (RNN) was trained using data from 5 hospitals, and external validation was conducted on data from 2 hospitals.
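The abstract does not specify the network's architecture, so the following is only a minimal numpy sketch of the multilabel idea it describes: one shared recurrent representation of a patient's time series, with an independent sigmoid output per condition (rather than a single softmax), so the model can exploit dependencies between ARDS, hypoxemia, and sepsis. The feature and hidden dimensions, and the randomly initialized weights standing in for a trained model, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper's actual architecture is not given here.
N_FEATURES = 8    # e.g., vitals and labs per hourly timestep
N_HIDDEN = 16
LABELS = ["ARDS", "hypoxemia", "sepsis"]  # related conditions share one model

# Randomly initialized weights stand in for a trained network.
Wx = rng.normal(0, 0.1, (N_HIDDEN, N_FEATURES))
Wh = rng.normal(0, 0.1, (N_HIDDEN, N_HIDDEN))
Wo = rng.normal(0, 0.1, (len(LABELS), N_HIDDEN))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(sequence):
    """Run a vanilla RNN over a (timesteps, features) array and return one
    independent probability per label (multilabel, not mutually exclusive)."""
    h = np.zeros(N_HIDDEN)
    for x in sequence:
        h = np.tanh(Wx @ x + Wh @ h)   # hidden state shared by all labels
    return sigmoid(Wo @ h)             # one sigmoid per condition

probs = predict(rng.normal(size=(24, N_FEATURES)))  # 24 hourly timesteps
print(dict(zip(LABELS, np.round(probs, 3))))
```

Because each label gets its own sigmoid, a patient can score high for several conditions at once, which is exactly what the multilabel framing requires.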
The Center for Computational Life Sciences at Cleveland Clinic offers an opportunity for a visionary Senior Faculty to establish our biomedical research and healthcare system as a global leader in Artificial Intelligence (AI) research. This Senior Faculty position will shape and expand AI technologies and applications centered around biomedical science and healthcare, supported by strong institutional commitment to build their vision. The Center for Computational Life Sciences will serve as a multi-institutional hub for collaborative research in computational life sciences and biomedicine positioned at the intersection of modern biology, computer science, predictive modeling, simulation, and AI. Cleveland Clinic's electronic health record (EHR) is one of the largest in the world, containing data on over 7 million patient lives. Multi-omics data go back well over 20 years and comprise billions of data points.
For Dr. Erich Huang, Duke Health's chief data officer for quality, one issue often overlooked when discussing AI in healthcare is the importance of the user experience. "It's not just an abstract Westworld brain sitting out there," Huang says. "It has to be well integrated with clinical workflow, and nurses are essential to that." With the Sepsis Watch early warning program, Huang says, nurses were able to apply their professional experience to kick off the cascade of actions that would follow an AI-produced alert. "One of the big issues with electronic health records is fatigue from alerts," he says.
Hardly a day goes by without another revelation of race, gender, and other biases being embedded in artificial intelligence systems. Just this month, for example, the makers of Silicon Valley's much-touted AI image generation system DALL-E disclosed that it exhibits biases including gender stereotypes and tends "to overrepresent people who are White-passing and Western concepts generally." For instance, it produces images of women for the prompt "a flight attendant" and images of men for the prompt "a builder." In the disclosure, OpenAI, the entity that trained DALL-E, says it is only releasing the program to a limited group of users while it works on mitigating bias and other risks. Meanwhile, researchers using machine learning to examine electronic health records found that Black patients were more than twice as likely to be described in derogatory terms (like "resistant" or "noncompliant") in their patient records. And those are the types of records that often make up the raw material for future AI programs, like the one that aimed to predict patient-reported pain from X-ray data but was only able to make successful predictions for White patients.
Delaying intubation for patients failing Bi-Level Positive Airway Pressure (BIPAP) may be associated with harm. The objective of this study was to develop a deep learning model capable of aiding clinical decision making by predicting BIPAP failure. This was a retrospective cohort study in a tertiary pediatric intensive care unit (PICU) between 2010 and 2020. Three machine learning models were developed to predict BIPAP failure: two logistic regression models and one deep learning model, a recurrent neural network with a Long Short-Term Memory architecture (LSTM-RNN). Model performance was evaluated in a holdout test set. Of 630 total BIPAP sessions, 175 (27.7%) were BIPAP failures. Patients in the BIPAP failure group were on BIPAP for a median of 32.8 (9.2–91.3) hours prior to intubation. Late BIPAP failure patients (intubated after more than 24 h on BIPAP) had fewer 28-day ventilator-free days (13.40 [0.68–20.96]), longer ICU length of stay, and more post-extubation BIPAP days than those intubated ≤ 24 h from BIPAP initiation. An AUROC above 0.5 indicates better-than-chance discrimination, meaning the model has extracted information about BIPAP failure that is potentially valuable to the clinical team. Within 6 h of BIPAP initiation, the LSTM-RNN model predicted which patients were likely to fail BIPAP with an AUROC of 0.81 (0.80, 0.82), superior to all other models. At that same point, the LSTM-RNN model would identify nearly 80% of BIPAP failures with a 50% false alarm rate, equal to a number needed to alert (NNA) of 2. In conclusion, a deep learning method using readily available data from the electronic health record can identify which patients on BIPAP are likely to fail with good discrimination, oftentimes days before they are intubated in usual practice.
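The reported operating point (roughly 80% sensitivity at a 50% false alarm rate, i.e., an NNA of 2: two alerts issued per true failure caught) follows mechanically from the alert statistics. Below is a small numpy sketch of how AUROC, sensitivity, false alarm rate, and NNA relate; the toy labels, scores, and threshold are invented for illustration and are not the study's data.

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the Mann-Whitney formulation: the probability that a random
    positive case outranks a random negative case (ties count half)."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true], scores[~y_true]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def alert_stats(y_true, scores, threshold):
    """Sensitivity, false alarm rate (share of alerts that are false),
    and number needed to alert (NNA = alerts per true failure caught)."""
    y_true = np.asarray(y_true, dtype=bool)
    alerts = np.asarray(scores, dtype=float) >= threshold
    tp = int((alerts & y_true).sum())
    fp = int((alerts & ~y_true).sum())
    sensitivity = tp / y_true.sum()
    false_alarm_rate = fp / max(tp + fp, 1)
    nna = (tp + fp) / max(tp, 1)   # NNA of 2 means half of all alerts are true
    return sensitivity, false_alarm_rate, nna

# Toy example: 1 = BIPAP failure, scores from a hypothetical model.
y = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
s = [0.9, 0.8, 0.7, 0.2, 0.75, 0.6, 0.3, 0.2, 0.1, 0.05]
print(round(auroc(y, s), 3))
print(alert_stats(y, s, threshold=0.55))
```

Sweeping the threshold trades sensitivity against false alarm rate; the paper's 6-hour operating point corresponds to one such threshold choice on the ROC curve.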
ChristianaCare this week announced some new help to augment its workforce: robotic assistants that can help nurses and other hospital staff spend more time with patients by automating certain time-intensive tasks. WHY IT MATTERS The technology, called Moxi, is a collaborative robot that can work alongside nurses and interact with them directly, performing nonclinical tasks such as deliveries and pickups to enable them to focus on care delivery. ChristianaCare purchased five of these 300-pound "cobots" – which can work 22-hour shifts, be fully charged in two hours and carry up to 70 pounds – with a $1.5 million grant from the American Nurses Foundation. The Moxi cobots will soon be integrated with ChristianaCare's Cerner electronic health record platform, officials say. Connected to the EHR data, the cobots can anticipate the needs of both clinicians and patients – and perform tasks without human involvement.
Investigators have identified characteristics of individuals with long COVID and those who are likely to have it by using machine learning techniques. The investigators, who were supported by the National Institutes of Health (NIH), analyzed a collection of electronic health records (EHR) available for COVID-19 research to help better identify who has long COVID. Investigators used the EHR data, from the National COVID Cohort Collaborative (N3C), a centralized national public database led by the NIH's National Center for Advancing Translational Sciences, to identify more than 100,000 likely cases of long COVID as of October 2021, and 200,000 cases as of May 2022. "It made sense to take advantage of modern data analysis tools and a unique big data resource like N3C, where many features of long COVID can be represented," Emily Pfaff, PhD, a clinical informaticist at the University of North Carolina at Chapel Hill, said in a statement. The N3C data includes information representing more than 13 million individuals nationwide and nearly 5 million positive COVID-19 cases.
As data analytics and other digital innovations become more widely adopted in healthcare, artificial intelligence (AI) will move from an administrative role to a clinical decision-making support role. Hospitals already use AI-based tools to develop custom care plans, check in patients for appointments and answer basic questions such as "How do I pay my bill?" AI is gaining traction as an "intelligent assistant" for physicians and clinicians. AI helps radiologists analyze images faster and organize them better. It pores through volumes of electronic medical record (EMR) data and symptoms to help diagnose disease.