GPT-PPG: A GPT-based Foundation Model for Photoplethysmography Signals

Zhaoliang Chen, Cheng Ding, Saurabh Kataria, Runze Yan, Minxiao Wang, Randall Lee, Xiao Hu

arXiv.org Artificial Intelligence 

This study introduces a novel application of a Generative Pre-trained Transformer (GPT) model tailored for photoplethysmography (PPG) signals, serving as a foundation model for various downstream tasks. By adapting the standard GPT architecture to the continuous nature of PPG signals, our approach demonstrates promising results. Our models are pre-trained on an extensive dataset containing more than 200 million 30-second PPG samples. We explored different supervised fine-tuning techniques to adapt the model to downstream tasks, achieving performance comparable to or surpassing current state-of-the-art (SOTA) methods on tasks such as atrial fibrillation detection. A standout feature of our GPT model is its inherent ability to perform generative tasks such as signal denoising effectively, without any further fine-tuning; we attribute this to the generative nature of the GPT framework.

Keywords: Foundation model, PPG, Generative Pre-trained Transformer

1. Introduction

The emergence of large language models (LLMs) such as BERT [1] and GPT [2] has revolutionized the field of artificial intelligence by introducing the concept of foundation models. These models, characterized by extensive pre-training on large datasets without explicit supervision, demonstrate remarkable versatility across downstream tasks via fine-tuning.
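The abstract does not spell out how the discrete-token GPT architecture was adapted to continuous waveforms. As a rough, non-authoritative sketch of one common approach, the PyTorch code below embeds fixed-length signal patches in place of token embeddings and pre-trains with a next-patch regression loss; every name and hyperparameter here (PPGGPT, patch_len=40, d_model=256, and so on) is an illustrative assumption, not the authors' implementation.

    import torch
    import torch.nn as nn

    class PPGGPT(nn.Module):
        # Hypothetical adaptation of a decoder-only Transformer to continuous PPG.
        def __init__(self, patch_len=40, d_model=256, n_heads=8, n_layers=6, max_patches=512):
            super().__init__()
            self.embed = nn.Linear(patch_len, d_model)     # linear patch embedding replaces the token-embedding table
            self.pos = nn.Embedding(max_patches, d_model)  # learned positional embeddings
            layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
            self.blocks = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, patch_len)      # regression head replaces the softmax over a vocabulary

        def forward(self, x):                              # x: (batch, n_patches, patch_len)
            n = x.size(1)
            h = self.embed(x) + self.pos(torch.arange(n, device=x.device))
            causal = nn.Transformer.generate_square_subsequent_mask(n).to(x.device)
            h = self.blocks(h, mask=causal)                # causal self-attention, as in GPT
            return self.head(h)                            # predicted next patch at every position

    # Pre-training objective: next-patch regression (MSE) instead of next-token cross-entropy.
    model = PPGGPT()
    ppg = torch.randn(8, 30, 40)                           # e.g. 8 signals split into 30 patches of 40 samples
    pred = model(ppg[:, :-1])                              # predict patches 2..30 from patches 1..29
    loss = nn.functional.mse_loss(pred, ppg[:, 1:])
    loss.backward()

Under this framing, generative uses such as the denoising mentioned above would correspond to autoregressively re-predicting the waveform from its own context, though the paper's exact procedure is not described in this excerpt.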
