Goto

Collaborating Authors

 Meng, Zeyuan


Fusion of ECG Foundation Model Embeddings to Improve Early Detection of Acute Coronary Syndromes

arXiv.org Artificial Intelligence

Acute Coronary Syndrome (ACS) is a life-threatening cardiovascular condition where early and accurate diagnosis is critical for effective treatment and improved patient outcomes. This study explores the use of ECG foundation models, specifically ST-MEM and ECG-FM, to enhance ACS risk assessment using prehospital ECG data collected in ambulances. Both models leverage self-supervised learning (SSL), with ST-MEM using a reconstruction-based approach and ECG-FM employing contrastive learning, capturing unique spatial and temporal ECG features. We evaluate the performance of these models individually and through a fusion approach, where their embeddings are combined for enhanced prediction. Results demonstrate that both foundation models outperform a baseline ResNet-50 model, with the fusion-based approach achieving the highest performance (AUROC: 0.843 ± 0.006, AUCPR: 0.674 ± 0.012). These findings highlight the potential of ECG foundation models for early ACS detection and motivate further exploration of advanced fusion strategies to maximize complementary feature utilization.
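The abstract describes a late-fusion setup in which embeddings from the two foundation models are combined before classification. Below is a minimal sketch of that idea, assuming pre-computed, frozen embeddings are concatenated and fed to a small classifier head; the module names, embedding sizes, and head architecture are illustrative placeholders, not the paper's actual implementation.

```python
# Hedged sketch: late fusion of embeddings from two frozen ECG encoders.
# `dim_stmem`, `dim_ecgfm`, and the head layout are assumed placeholders.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, dim_stmem: int = 768, dim_ecgfm: int = 768, n_classes: int = 2):
        super().__init__()
        # Small classifier head over the concatenated embeddings.
        self.head = nn.Sequential(
            nn.Linear(dim_stmem + dim_ecgfm, 256),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(256, n_classes),
        )

    def forward(self, emb_stmem: torch.Tensor, emb_ecgfm: torch.Tensor) -> torch.Tensor:
        # Late fusion: concatenate per-record embeddings along the feature axis.
        fused = torch.cat([emb_stmem, emb_ecgfm], dim=-1)
        return self.head(fused)

# Usage with placeholder embeddings for a batch of prehospital ECG records.
emb_a = torch.randn(8, 768)  # e.g. ST-MEM embeddings (random stand-ins)
emb_b = torch.randn(8, 768)  # e.g. ECG-FM embeddings (random stand-ins)
logits = FusionClassifier()(emb_a, emb_b)  # shape: (8, 2)
```

In this kind of setup the classifier output would then be scored with AUROC and AUPRC, as reported in the abstract.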


VP-LLM: Text-Driven 3D Volume Completion with Large Language Models through Patchification

arXiv.org Artificial Intelligence

Recent conditional 3D completion works have mainly relied on CLIP or BERT to encode textual information, which cannot support complex instructions. Meanwhile, large language models (LLMs) have shown great potential in multi-modal understanding and generation tasks. Inspired by recent advancements in LLMs, we present Volume Patch LLM (VP-LLM), which leverages LLMs to perform conditional 3D completion in a single forward pass. To integrate a 3D model into the LLM tokenization configuration, the incomplete 3D object is first divided into small patches that can be encoded independently. These encoded patches are then fed into an LLM along with the text prompt, instructing the LLM to capture the relations between these patches as well as to inject semantic meaning into the 3D object. Our results demonstrate a strong ability of LLMs to interpret complex text instructions and understand 3D objects, surpassing state-of-the-art diffusion-based 3D completion models in generation quality.
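The patchification step described above (splitting an incomplete volume into independently encoded patches that become LLM tokens) can be illustrated with a minimal sketch, assuming an occupancy-grid input and a simple linear projection into the LLM embedding space; the class name, patch size, and embedding dimension are assumptions for illustration, not VP-LLM's actual architecture.

```python
# Hedged sketch: split a voxel grid into fixed-size patches and project each
# patch to an LLM-token-sized embedding. These tokens would then be
# concatenated with the text-prompt embeddings before the LLM forward pass.
import torch
import torch.nn as nn

class VolumePatchifier(nn.Module):
    def __init__(self, patch_size: int = 8, llm_dim: int = 4096):
        super().__init__()
        self.patch_size = patch_size
        # Linear projection from a flattened voxel patch to an LLM token embedding.
        self.proj = nn.Linear(patch_size ** 3, llm_dim)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (B, D, H, W) occupancy grid; D, H, W assumed divisible by patch_size.
        B, D, H, W = volume.shape
        p = self.patch_size
        # Carve the grid into non-overlapping p x p x p patches and flatten each.
        patches = volume.reshape(B, D // p, p, H // p, p, W // p, p)
        patches = patches.permute(0, 1, 3, 5, 2, 4, 6).reshape(B, -1, p ** 3)
        return self.proj(patches)  # (B, num_patches, llm_dim)

# Usage: each volume token joins the text-prompt tokens for a single forward
# pass through the LLM (text embedding omitted here for brevity).
vol = torch.rand(2, 32, 32, 32)       # placeholder incomplete volumes
vol_tokens = VolumePatchifier()(vol)  # (2, 64, 4096)
```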