Analysis of heart failure patient trajectories using sequence modeling

Dippel, Falk, Yu, Yinan, Rosengren, Annika, Lindgren, Martin, Lundberg, Christina E., Aerts, Erik, Adiels, Martin, Sjöland, Helen

arXiv.org Artificial Intelligence

Transformers have defined the state of the art for clinical prediction tasks involving electronic health records (EHRs). The recently introduced Mamba architecture outperformed an advanced Transformer (Transformer++) based on Llama in handling long context lengths, while using fewer model parameters. Despite the impressive performance of these architectures, a systematic approach to empirically analyzing model performance and efficiency under various settings is not well established in the medical domain. The performance of six sequence models was investigated across three architecture classes (Transformers, Transformers++, Mambas) in a large Swedish heart failure (HF) cohort (N = 42820), providing a clinically relevant case study. Patient data included diagnoses, vital signs, laboratory results, medications, and procedures extracted from in-hospital EHRs. The models were evaluated on three one-year prediction tasks: clinical instability (a readmission phenotype) after initial HF hospitalization, mortality after initial HF hospitalization, and mortality after the latest hospitalization. Ablations account for modifications of the EHR-based input patient sequence, architectural model configurations, and temporal preprocessing techniques for data collection. Llama achieved the highest predictive discrimination and the best calibration, and was robust across all tasks, followed by the Mamba models. Both architectures demonstrate efficient representation learning, with tiny configurations surpassing larger-scale Transformers. At equal model size, Llama and the Mamba models achieve superior performance using 25% less training data. This paper presents a first ablation study with systematic design choices for input tokenization, model configuration, and temporal data preprocessing. Future model development for clinical prediction tasks using EHRs could build on this study's recommendations as a starting point.
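
The core preprocessing step this abstract describes, turning a patient's in-hospital event stream into a token sequence for a Transformer or Mamba, can be sketched as below. The event categories follow the abstract, but the token vocabulary, the lab-binning thresholds, and the special tokens are illustrative assumptions, not the paper's actual tokenizer.

```python
# Minimal sketch: flatten EHR events (diagnoses, labs, medications) into
# a token sequence for a sequence model. Thresholds and names are assumed.

def bin_lab(name, value):
    """Map a continuous lab value to a coarse token (assumed 3-bin scheme)."""
    lo, hi = {"NT-proBNP": (300, 1800), "creatinine": (60, 110)}.get(name, (0, 1))
    level = "low" if value < lo else "high" if value > hi else "normal"
    return f"LAB_{name}_{level}"

def tokenize_visit(events):
    """Flatten one hospitalization's events into tokens, in time order."""
    tokens = ["<VISIT_START>"]
    for kind, payload in events:
        if kind == "dx":                      # ICD-coded diagnosis
            tokens.append(f"DX_{payload}")
        elif kind == "med":                   # medication order
            tokens.append(f"MED_{payload}")
        elif kind == "lab":                   # (name, value) pair
            tokens.append(bin_lab(*payload))
    tokens.append("<VISIT_END>")
    return tokens

events = [("dx", "I50.9"), ("lab", ("NT-proBNP", 2500)), ("med", "furosemide")]
print(tokenize_visit(events))
```

The resulting token lists would then be mapped to integer IDs and fed to any of the three architecture classes; the paper's ablations over "input patient sequence" correspond to varying exactly this kind of mapping.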




AI for pRedicting Exacerbations in KIDs with aSthma (AIRE-KIDS)

Ooi, Hui-Lee, Mitsakakis, Nicholas, Dastarac, Margerie Huet, Zemek, Roger, Plint, Amy C., Gilchrist, Jeff, Emam, Khaled El, Radhakrishnan, Dhenuka

arXiv.org Artificial Intelligence

Recurrent exacerbations remain a common yet preventable outcome for many children with asthma. Machine learning (ML) algorithms using electronic medical records (EMR) could allow accurate identification of children at risk for exacerbations and facilitate referral for preventative comprehensive care to avoid this morbidity. We developed ML algorithms to predict repeat severe exacerbations (i.e., asthma-related emergency department (ED) visits or future hospital admissions) for children with a prior asthma ED visit at a tertiary care children's hospital. Retrospective pre-COVID-19 (Feb 2017 - Feb 2019, N=2716) Epic EMR data from the Children's Hospital of Eastern Ontario (CHEO), linked with environmental pollutant exposure and neighbourhood marginalization information, were used to train various ML models. We used boosted trees (LGBM, XGB) and three open-source large language model (LLM) approaches (DistilGPT2, Llama 3.2 1B, and Llama-8b-UltraMedical). Models were tuned and calibrated, then validated on a second retrospective post-COVID-19 dataset (Jul 2022 - Apr 2023, N=1237) from CHEO. Models were compared using the area under the curve (AUC) and F1 scores, with SHAP values used to determine the most predictive features. The LGBM model performed best; the most predictive features in the final AIRE-KIDS_ED model included prior asthma ED visit, the Canadian triage acuity scale, medical complexity, food allergy, prior ED visits for non-asthma respiratory diagnoses, and age, yielding an AUC of 0.712 and an F1 score of 0.51. This is a nontrivial improvement over the current decision rule, which has F1=0.334. The most predictive features in the AIRE-KIDS_HOSP model included medical complexity, prior asthma ED visit, average wait time in the ED, the pediatric respiratory assessment measure score at triage, and food allergy.


SimulRAG: Simulator-based RAG for Grounding LLMs in Long-form Scientific QA

Xu, Haozhou, Wu, Dongxia, Chinazzi, Matteo, Niu, Ruijia, Yu, Rose, Ma, Yi-An

arXiv.org Artificial Intelligence

Large language models (LLMs) show promise in solving scientific problems. They can help generate long-form answers for scientific questions, which are crucial for comprehensive understanding of complex phenomena that require detailed explanations spanning multiple interconnected concepts and evidence. However, LLMs often suffer from hallucination, especially in the challenging task of long-form scientific question answering. Retrieval-Augmented Generation (RAG) approaches can ground LLMs by incorporating external knowledge sources to improve trustworthiness. In this context, scientific simulators, which play a vital role in validating hypotheses, offer a particularly promising retrieval source to mitigate hallucination and enhance answer factuality. However, existing RAG approaches cannot be directly applied to scientific simulation-based retrieval due to two fundamental challenges: how to retrieve from scientific simulators, and how to efficiently verify and update long-form answers. To overcome these challenges, we propose the simulator-based RAG framework (SimulRAG) and provide a long-form scientific QA benchmark covering climate science and epidemiology, with ground truth verified by both simulations and human annotators. In this framework, we propose a generalized simulator retrieval interface to transform between textual and numerical modalities. We further design a claim-level generation method that utilizes uncertainty estimation scores and simulator boundary assessment (UE+SBA) to efficiently verify and update claims. Extensive experiments demonstrate that SimulRAG outperforms traditional RAG baselines by 30.4% in informativeness and 16.3% in factuality. UE+SBA further improves efficiency and quality for claim-level generation.
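
The claim-level verify-and-update loop described above can be schematized as follows. Everything here is a stub assumption: the uncertainty scores, the threshold, and the simulator verdicts would come from an LLM and a real scientific simulator in the actual framework, not from hard-coded dictionaries.

```python
# Schematic of a claim-level verify-and-update loop in the spirit of
# UE+SBA: only claims the model is uncertain about get sent to the
# (stubbed) simulator; refuted claims are revised.

def simulator_check(claim):
    """Stub simulator: returns a hard-coded verdict for a claim."""
    truths = {"R0 is above 1": True, "peak occurs in week 3": False}
    return truths.get(claim, True)

def verify_and_update(claims, uncertainty, threshold=0.5):
    """Route high-uncertainty claims through the simulator for checking."""
    out = []
    for claim in claims:
        if uncertainty[claim] < threshold:
            out.append(claim)                      # confident: keep as-is
        elif simulator_check(claim):
            out.append(claim)                      # simulator confirms
        else:
            out.append(f"[revised] not({claim})")  # simulator refutes: update
    return out

claims = ["R0 is above 1", "peak occurs in week 3"]
uncertainty = {"R0 is above 1": 0.2, "peak occurs in week 3": 0.9}
print(verify_and_update(claims, uncertainty))
```

Gating simulator calls on uncertainty is what makes the scheme efficient: confident claims skip the expensive simulation entirely.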


Evaluating Retrieval-Augmented Generation vs. Long-Context Input for Clinical Reasoning over EHRs

Myers, Skatje, Dligach, Dmitriy, Miller, Timothy A., Barr, Samantha, Gao, Yanjun, Churpek, Matthew, Mayampurath, Anoop, Afshar, Majid

arXiv.org Artificial Intelligence

Electronic health records (EHRs) are long, noisy, and often redundant, posing a major challenge for the clinicians who must navigate them. Large language models (LLMs) offer a promising solution for extracting and reasoning over this unstructured text, but the length of clinical notes often exceeds even state-of-the-art models' extended context windows. Retrieval-augmented generation (RAG) offers an alternative by retrieving task-relevant passages from across the entire EHR, potentially reducing the number of required input tokens. In this work, we propose three clinical tasks designed to be replicable across health systems with minimal effort: 1) extracting imaging procedures, 2) generating timelines of antibiotic use, and 3) identifying key diagnoses. Using EHRs from actual hospitalized patients, we test three state-of-the-art LLMs with varying amounts of provided context, using either targeted text retrieval or the most recent clinical notes. We find that RAG closely matches or exceeds the performance of using recent notes, and approaches the performance of using the models' full context while requiring drastically fewer input tokens. Our results suggest that RAG remains a competitive and efficient approach even as newer models become capable of handling increasingly long inputs.
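
The retrieval-under-a-token-budget idea described above can be sketched as below. TF-IDF cosine similarity is a simple stand-in for the paper's retriever, the note chunks are invented examples, and the whitespace token count is a crude proxy for a real tokenizer.

```python
# Minimal sketch: rank note chunks against a clinical query and keep
# only relevant chunks that fit a token budget (all inputs are invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [
    "CT chest with contrast performed on hospital day 2.",
    "Patient ambulating well, tolerating diet.",
    "Started vancomycin for suspected MRSA bacteremia.",
    "MRI brain without acute findings.",
]
query = "which imaging procedures were performed"

vec = TfidfVectorizer().fit(chunks + [query])
scores = cosine_similarity(vec.transform([query]), vec.transform(chunks))[0]

budget = 20  # crude whitespace-token budget (an assumption)
selected, used = [], 0
for i in sorted(range(len(chunks)), key=lambda i: -scores[i]):
    n = len(chunks[i].split())
    if scores[i] > 0 and used + n <= budget:
        selected.append(chunks[i])
        used += n
print(selected)
```

The paper's comparison is essentially between this kind of targeted selection and the naive baseline of filling the budget with the most recent notes regardless of relevance.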




Integrating Spatiotemporal Features in LSTM for Spatially Informed COVID-19 Hospitalization Forecasting

Wang, Zhongying, Ngo, Thoai D., Zoraghein, Hamidreza, Lucas, Benjamin, Karimzadeh, Morteza

arXiv.org Artificial Intelligence

Despite the end of the pandemic phase and declining mortality rates, COVID-19 remains a significant global health concern. According to the Centers for Disease Control and Prevention (CDC) COVID-19 Dashboard, the disease exhibited a peak weekly test positivity of 18% in the U.S. in 2024. Although the recorded hospitalization rate of 4.8 per 10,000 population on August 10, 2024, may appear comparatively low, it underscores the continuing impact of the disease. According to communications received from the CDC, hospitals are mandated to report COVID-19 hospitalizations again starting in mid-November 2024, indicating the resurgence of the disease. The COVID-19 pandemic strained healthcare resources and overloaded hospitals, exacerbating the dramatic loss of human life. SARS-CoV-2 spreads rapidly, causing severe complications due to its high reproduction rate, the ability to spread via asymptomatic individuals, the prevalence of close-contact settings in densely populated areas, continual mutation into more transmissible variants, and the inconsistent application of preventive public health measures across the U.S. As a result, the demand for travel nurses surged during the pandemic, aligning with shifts in COVID-19 infection hotspots (Cole et al. 2021, Longyear et al. 2020). This was partially a geospatial problem related to the timely allocation of limited human and medical resources. Reliable geographic forecasting of COVID-19 hospital admissions could have alleviated this burden through policy-relevant decision-making and proactive allocation of resources in regional hotspots (i.e.
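
The "spatially informed" idea named in this paper's title, augmenting each region's hospitalization series with information from its neighbors before sequence modeling, can be sketched as below. The adjacency matrix and series are toy assumptions, and the LSTM itself is omitted; only the spatial feature construction is shown.

```python
# Sketch: build a neighbor-averaged (spatially lagged) hospitalization
# series per region, to be stacked with the region's own series as
# LSTM input features. All numbers are toy data.
import numpy as np

# Weekly hospitalizations for 3 toy regions (rows) over 5 weeks (cols).
hosp = np.array([
    [10., 12., 15., 20., 18.],
    [ 3.,  4.,  6.,  9.,  8.],
    [ 7.,  7., 10., 14., 12.],
])

# Row-normalized adjacency: each region's neighbors contribute equally.
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
W = adj / adj.sum(axis=1, keepdims=True)

spatial_lag = W @ hosp  # neighbor-averaged series per region
features = np.stack([hosp, spatial_lag], axis=-1)  # (regions, weeks, 2)
print(features.shape)
```

Each timestep now carries both the region's own count and its neighborhood average, which is one simple way to let a per-region LSTM see the kind of spatial spillover (shifting infection hotspots) the abstract describes.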


Machine Learning and Statistical Insights into Hospital Stay Durations: The Italian EHR Case

Andric, Marina, Dragoni, Mauro

arXiv.org Artificial Intelligence

Length of hospital stay (LoS) is a critical metric for assessing healthcare quality and optimizing hospital resource management. This study aims to identify factors influencing LoS within the Italian healthcare context, using a dataset of hospitalization records from over 60 healthcare facilities in the Piedmont region, spanning from 2020 to 2023. We explored a variety of features, including patient characteristics, comorbidities, admission details, and hospital-specific factors. Significant correlations were found between LoS and features such as age group, comorbidity score, admission type, and the month of admission. Machine learning models, specifically CatBoost and Random Forest, were used to predict LoS. The highest R2 score, 0.49, was achieved with CatBoost, demonstrating good predictive performance.
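
The regression setup this abstract describes can be sketched with scikit-learn; here `RandomForestRegressor` stands in for CatBoost, and the features (age group, comorbidity score, admission type) are synthetic placeholders rather than the Piedmont records.

```python
# Toy sketch of LoS regression evaluated with the R2 score, on synthetic
# data whose features mirror those named in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1500
age_group = rng.integers(0, 9, n)       # assumed coded age bands
comorbidity = rng.integers(0, 6, n)     # assumed comorbidity score
admission_type = rng.integers(0, 3, n)  # e.g. elective/urgent/transfer
noise = rng.normal(scale=2.0, size=n)
los = 2 + 0.8 * comorbidity + 0.4 * age_group + admission_type + noise

X = np.column_stack([age_group, comorbidity, admission_type])
X_tr, X_te, y_tr, y_te = train_test_split(X, los, test_size=0.3, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R2:", round(r2_score(y_te, model.predict(X_te)), 2))
```

As in the study, the R2 here is bounded by how much of the LoS variance the features can explain; the irreducible noise term plays the role of unmeasured patient-level factors.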