GPT-PPG: A GPT-based Foundation Model for Photoplethysmography Signals
Chen, Zhaoliang, Ding, Cheng, Kataria, Saurabh, Yan, Runze, Wang, Minxiao, Lee, Randall, Hu, Xiao
arXiv.org Artificial Intelligence, Mar-10-2025
This study introduces a novel application of a Generative Pre-trained Transformer (GPT) model tailored to photoplethysmography (PPG) signals, serving as a foundation model for a range of downstream tasks. By adapting the standard GPT architecture to the continuous-valued nature of PPG signals, our approach demonstrates promising results. Our models are pre-trained on an extensive dataset containing more than 200 million 30-second PPG samples. We explore several supervised fine-tuning techniques to adapt the model to downstream tasks, achieving performance comparable to or surpassing current state-of-the-art (SOTA) methods on tasks such as atrial fibrillation detection. A standout feature of our GPT model is its inherent ability to perform generative tasks such as signal denoising effectively, without any further fine-tuning; we attribute this to the generative nature of the GPT framework.

Keywords: Foundation model, PPG, Generative Pre-trained Transformer

1. Introduction

The emergence of large language models (LLMs) such as BERT [1] and GPT [2] has revolutionized the field of artificial intelligence by introducing the concept of foundation models. These models, characterized by extensive pre-training on large datasets without explicit supervision, demonstrate remarkable versatility across downstream tasks via fine-tuning.
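The central architectural change described in the abstract is that the discrete token machinery of a standard GPT must be replaced by components suited to continuous waveforms. Below is a minimal PyTorch sketch of one way such an adaptation could look; the patch length, model sizes, layer names, and the next-patch mean-squared-error objective are illustrative assumptions, not the authors' reported implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PPGGPT(nn.Module):
    """GPT-style causal transformer over continuous PPG patches.

    Hypothetical sketch: each 30-second window is split into fixed-length
    patches; a linear projection replaces the discrete token embedding,
    and a regression head replaces the softmax over a vocabulary.
    """

    def __init__(self, patch_len=50, d_model=256, n_heads=8,
                 n_layers=6, max_patches=256):
        super().__init__()
        self.embed = nn.Linear(patch_len, d_model)     # continuous "token" embedding
        self.pos = nn.Embedding(max_patches, d_model)  # learned positional embedding
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, patch_len)      # regress the next patch

    def forward(self, x):
        # x: (batch, n_patches, patch_len), raw PPG values
        t = x.shape[1]
        h = self.embed(x) + self.pos(torch.arange(t, device=x.device))
        causal = nn.Transformer.generate_square_subsequent_mask(t).to(x.device)
        h = self.blocks(h, mask=causal)                # causal self-attention
        return self.head(h)

def next_patch_loss(model, x):
    # Next-patch regression: a continuous analogue of the next-token
    # cross-entropy objective used to pre-train language GPTs.
    pred = model(x[:, :-1])        # predict patch t+1 from patches <= t
    return F.mse_loss(pred, x[:, 1:])

# Example: a batch of 4 windows, each split into 12 patches of 50 samples.
model = PPGGPT()
loss = next_patch_loss(model, torch.randn(4, 12, 50))
loss.backward()
```

Under these assumptions, fine-tuning could reuse the same backbone with a lightweight task head (e.g., mean-pooling the patch representations for atrial fibrillation classification), while the pre-trained next-patch predictor can be run autoregressively for generative tasks such as denoising, consistent with the abstract's claim that no further fine-tuning is needed for those tasks.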