physiological signal


EEVR: A Dataset of Paired Physiological Signals and Textual Descriptions for Joint Emotion Representation Learning

Neural Information Processing Systems

EEVR (Emotion Elicitation in Virtual Reality) is a novel dataset specifically designed for language supervision-based pre-training of emotion recognition tasks, such as valence and arousal classification. It features high-quality physiological signals, including electrodermal activity (EDA) and photoplethysmography (PPG), acquired through emotion elicitation via 360-degree virtual reality (VR) videos. Additionally, it includes subject-wise textual descriptions of the emotions experienced during each stimulus, gathered from qualitative interviews. The dataset consists of recordings from 37 participants and is the first dataset to pair raw text with physiological signals, providing additional contextual information that objective labels cannot offer. To leverage this dataset, we introduce the Contrastive Language Signal Pre-training (CLSP) method, which jointly learns representations using pairs of physiological signals and textual descriptions. Our results show that integrating self-reported textual descriptions with physiological signals significantly improves performance on emotion recognition tasks such as arousal and valence classification. Moreover, our pre-trained CLSP model demonstrates strong zero-shot transferability to existing datasets, outperforming supervised baseline models and suggesting that the representations learned by our method are more contextualized and generalized. The dataset release also includes baseline models for arousal, valence, and emotion classification, as well as code for data cleaning and feature extraction.
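The contrastive pairing of signals and text described above is in the family of CLIP-style objectives. As a minimal sketch (not the paper's actual implementation; the function name, NumPy-only formulation, and default temperature are assumptions), a symmetric InfoNCE loss over matched signal/text embedding pairs could look like:

```python
import numpy as np

def clsp_contrastive_loss(sig_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over paired embeddings.

    sig_emb, txt_emb: (N, D) arrays; row i of each is a matched pair.
    Matched pairs are pulled together, mismatched rows pushed apart.
    """
    # L2-normalize so dot products are cosine similarities
    s = sig_emb / np.linalg.norm(sig_emb, axis=1, keepdims=True)
    t = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = s @ t.T / temperature          # (N, N) similarity matrix
    labels = np.arange(len(s))              # matched pairs on the diagonal

    def xent(lg):
        # cross-entropy of the diagonal (correct pairing) per row
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the signal->text and text->signal directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Correctly matched pairs should yield a lower loss than a shuffled pairing, which is the property the pre-training exploits.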


Datasheet - SCAMPS

Neural Information Processing Systems

Motivation

For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled?
SCAMPS is a dataset of high-fidelity synthetic human simulations designed for training and testing camera physiological measurement methods.

Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?
The data were created by the Human Synthetics, HUE, and Biomedical Computing Teams at Microsoft.

Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number.


HCFSLN: Adaptive Hyperbolic Few-Shot Learning for Multimodal Anxiety Detection

Sneh, Aditya, Sahu, Nilesh Kumar, Shelke, Anushka Sanjay, Adyasha, Arya, Lone, Haroon R.

arXiv.org Artificial Intelligence

Anxiety disorders impact millions globally, yet traditional diagnosis relies on clinical interviews, while machine learning models struggle with overfitting due to limited data. Large-scale data collection remains costly and time-consuming, restricting accessibility. To address this, we introduce the Hyperbolic Curvature Few-Shot Learning Network (HCFSLN), a novel Few-Shot Learning (FSL) framework for multimodal anxiety detection, integrating speech, physiological signals, and video data. HCFSLN enhances feature separability through hyperbolic embeddings, cross-modal attention, and an adaptive gating network, enabling robust classification with minimal data. We collected a multimodal anxiety dataset from 108 participants and benchmarked HCFSLN against six FSL baselines, achieving 88% accuracy, outperforming the best baseline by 14%. These results highlight the effectiveness of hyperbolic space for modeling anxiety-related speech patterns and demonstrate FSL's potential for anxiety classification.
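The hyperbolic embeddings credited with HCFSLN's feature separability rely on distances in a negatively curved space, where points near the boundary grow exponentially far apart. As a hedged illustration (not the paper's code; the function name and Poincaré-ball formulation are assumptions), the geodesic distance on the Poincaré ball of curvature -c is:

```python
import numpy as np

def poincare_distance(u, v, c=1.0, eps=1e-9):
    """Geodesic distance between points u, v inside the Poincaré ball
    of curvature -c. Distances blow up near the boundary, giving
    hyperbolic embeddings extra room to separate fine-grained classes."""
    sqc = np.sqrt(c)
    diff2 = np.sum((u - v) ** 2)
    denom = (1 - c * np.sum(u * u)) * (1 - c * np.sum(v * v))
    arg = 1 + 2 * c * diff2 / max(denom, eps)
    return (1 / sqc) * np.arccosh(arg)
```

Moving a point from radius 0.5 to radius 0.9 more than doubles its hyperbolic distance from the origin, even though the Euclidean change is small; that expansion is what sharpens class margins under few-shot constraints.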


Predicting Cognitive Assessment Scores in Older Adults with Cognitive Impairment Using Wearable Sensors

Habadi, Assma, Zefran, Milos, Yin, Lijuan, Song, Woojin, Caceres, Maria, Hu, Elise, Muramatsu, Naoko

arXiv.org Artificial Intelligence

Background and Objectives: This paper focuses on using AI to assess the cognitive function of older adults with mild cognitive impairment or mild dementia using physiological data provided by a wearable device. Cognitive screening tools are disruptive, time-consuming, and only capture brief snapshots of activity. Wearable sensors offer an attractive alternative by continuously monitoring physiological signals. This study investigated whether physiological data can accurately predict scores on established cognitive tests. Research Design and Methods: We recorded physiological signals from 23 older adults completing three NIH Toolbox Cognitive Battery tests, which assess working memory, processing speed, and attention. The Empatica EmbracePlus, a wearable device, measured blood volume pulse, skin conductance, temperature, and movement. Statistical features were extracted using wavelet-based and segmentation methods. We then applied supervised learning and validated predictions via cross-validation, hold-out testing, and bootstrapping. Results: Our models showed strong performance with Spearman's ρ of 0.73-0.82 and mean absolute errors of 0.14-0.16, significantly outperforming a naive mean predictor. Sensor roles varied: heart-related signals combined with movement and temperature best predicted working memory, movement paired with skin conductance was most informative for processing speed, and heart in tandem with skin conductance worked best for attention. Discussion and Implications: These findings suggest that wearable sensors paired with AI tools such as supervised learning and feature engineering can noninvasively track specific cognitive functions in older adults, enabling continuous monitoring. Our study demonstrates how AI can be leveraged when the data sample is small. This approach may support remote assessments and facilitate clinical interventions.
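The wavelet-based statistical features mentioned above can be illustrated with a minimal Haar decomposition (a sketch under assumed choices: the paper does not specify the wavelet family, and this hand-rolled NumPy version stands in for a library call). Each level splits the signal into a coarser approximation and a detail band, and the per-band energies serve as features:

```python
import numpy as np

def haar_features(x, levels=3):
    """Multi-level Haar wavelet decomposition of a 1-D signal, returning
    the energy of each detail band plus the final approximation energy --
    a simple statistical-feature recipe for wearable sensor traces."""
    x = np.asarray(x, dtype=float)
    feats = []
    for _ in range(levels):
        if len(x) % 2:                       # pad to even length
            x = np.append(x, x[-1])
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        feats.append(np.sum(detail ** 2))    # detail-band energy
        x = approx                           # recurse on the coarse part
    feats.append(np.sum(x ** 2))             # residual approximation energy
    return np.array(feats)
```

Because the Haar transform is orthonormal, the band energies sum to the total signal energy (when no padding is needed), so the features partition the signal's variance across time scales.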


Non-Contact Health Monitoring During Daily Personal Care Routines

Ma, Xulin, Tang, Jiankai, Jiang, Zhang, Cheng, Songqin, Shi, Yuanchun, LI, Dong, Liu, Xin, McDuff, Daniel, Liu, Xiaojing, Wang, Yuntao

arXiv.org Artificial Intelligence

Remote photoplethysmography (rPPG) enables non-contact, continuous monitoring of physiological signals and offers a practical alternative to traditional health sensing methods. Although rPPG is promising for daily health monitoring, its application in long-term personal care scenarios--such as mirror-facing routines in high-altitude environments--remains challenging due to ambient lighting variations, frequent occlusions from hand movements, and dynamic facial postures. To address these challenges, we present the Long-term Altitude Daily Health (LADH) dataset, the first long-term rPPG dataset containing 240 synchronized RGB and infrared (IR) facial videos from 21 participants across five common personal care scenarios, along with ground-truth PPG, respiration, and blood oxygen signals. Our experiments demonstrate that combining RGB and IR video inputs improves the accuracy and robustness of non-contact physiological monitoring, achieving a mean absolute error (MAE) of 4.99 BPM in heart rate estimation. Furthermore, we find that multi-task learning enhances performance across multiple physiological indicators simultaneously.
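Heart-rate estimation from a pulse trace, the task the MAE above measures, is commonly done by locating the dominant spectral peak in the plausible cardiac band. As a toy frequency-domain baseline (not the paper's learned model; the function name and band limits are assumptions):

```python
import numpy as np

def estimate_hr_bpm(ppg, fs, lo=0.7, hi=3.0):
    """Estimate heart rate from a (r)PPG trace via the dominant FFT
    peak inside the cardiac band lo..hi Hz (i.e. 42..180 BPM).

    ppg: 1-D pulse waveform; fs: sampling rate in Hz."""
    ppg = np.asarray(ppg, dtype=float)
    ppg = ppg - ppg.mean()                       # remove DC offset
    spec = np.abs(np.fft.rfft(ppg))
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    peak_hz = freqs[band][np.argmax(spec[band])]
    return 60.0 * peak_hz                        # Hz -> beats per minute
```

For a clean 1.2 Hz pulse sampled at 30 fps this recovers 72 BPM; reported MAEs like 4.99 BPM quantify how far estimates drift from a contact-sensor ground truth under realistic lighting and motion.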


PhysioWave: A Multi-Scale Wavelet-Transformer for Physiological Signal Representation

Chen, Yanlong, Orlandi, Mattia, Rapa, Pierangelo Maria, Benatti, Simone, Benini, Luca, Li, Yawei

arXiv.org Artificial Intelligence

Physiological signals are often corrupted by motion artifacts, baseline drift, and other low-SNR disturbances, which pose significant challenges for analysis. Additionally, these signals exhibit strong non-stationarity, with sharp peaks and abrupt changes that evolve continuously, making them difficult to represent using traditional time-domain or filtering methods. To address these issues, a novel wavelet-based approach for physiological signal analysis is presented, aiming to capture multi-scale time-frequency features in various physiological signals. Leveraging this technique, two large-scale pretrained models specific to EMG and ECG are introduced for the first time, achieving superior performance and setting new baselines in downstream tasks. Additionally, a unified multi-modal framework is constructed by integrating a pretrained EEG model, where each modality is guided through its dedicated branch and fused via learnable weighted fusion. This design effectively addresses challenges such as low signal-to-noise ratio, high inter-subject variability, and device mismatch, outperforming existing methods on multi-modal tasks. The proposed wavelet-based architecture lays a solid foundation for the analysis of diverse physiological signals, while the multi-modal design points to next-generation physiological signal processing with potential impact on wearable health monitoring, clinical diagnostics, and broader biomedical applications. Code and data are available at: github.com/ForeverBlue816/PhysioWave
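The "learnable weighted fusion" over per-modality branches can be sketched as softmax-normalized scalar weights applied to the branch embeddings (a minimal late-fusion illustration, not PhysioWave's implementation; the function name and shapes are assumptions):

```python
import numpy as np

def weighted_fusion(branch_outputs, logits):
    """Fuse per-modality embeddings (e.g. EMG/ECG/EEG branch outputs)
    with softmax-normalized learnable weights.

    branch_outputs: (M, D) array, one embedding per modality branch.
    logits: (M,) learnable scalars, trained with the rest of the model."""
    w = np.exp(logits - logits.max())
    w = w / w.sum()                  # softmax over the M modalities
    return w @ branch_outputs        # (D,) fused representation
```

With equal logits the fusion reduces to an average; during training the logits can shift weight toward the more reliable modality, which is one simple way to absorb device mismatch and per-modality noise.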


LibEMER: A novel benchmark and algorithms library for EEG-based Multimodal Emotion Recognition

Liu, Zejun, Chen, Yunshan, Xie, Chengxi, Xie, Yugui, Liu, Huan

arXiv.org Artificial Intelligence

EEG-based multimodal emotion recognition (EMER) has gained significant attention and witnessed notable advancements; the inherent complexity of human neural systems has motivated substantial efforts toward multimodal approaches. However, this field currently suffers from three critical limitations: (i) the absence of open-source implementations. To address these challenges, we introduce LibEMER, a unified evaluation framework that provides fully reproducible PyTorch implementations of curated deep learning methods alongside standardized protocols for data preprocessing, model realization, and experimental setups. This framework enables unbiased performance assessment on three widely used public datasets across two learning tasks. The open-source library is publicly accessible at: LibEMER.

Index Terms: multimodal learning, emotion recognition, benchmark, electroencephalography (EEG), open-source library

1. INTRODUCTION
EEG-based multimodal emotion recognition (EMER) represents a critical research domain within affective computing, focusing on the development of computational models for precise identification of human emotional states. As electroencephalography (EEG) provides direct measurements of cortical neural activity, it has received continuous attention.


PhysioME: A Robust Multimodal Self-Supervised Framework for Physiological Signals with Missing Modalities

Lee, Cheol-Hui, Lee, Hwa-Yeon, Jung, Min-Kyung, Kim, Dong-Joo

arXiv.org Artificial Intelligence

Missing or corrupted modalities are common in physiological signal-based medical applications owing to hardware constraints or motion artifacts. However, most existing methods assume the availability of all modalities, resulting in substantial performance degradation in the absence of any modality. To overcome this limitation, this study proposes PhysioME, a robust framework designed to ensure reliable performance under missing modality conditions. PhysioME adopts: (1) a multimodal self-supervised learning approach that combines contrastive learning with masked prediction; (2) a Dual-Path NeuroNet backbone tailored to capture the temporal dynamics of each physiological signal modality; and (3) a restoration decoder that reconstructs missing modality tokens, enabling flexible processing of incomplete inputs. The experimental results show that PhysioME achieves high consistency and generalization performance across various missing modality scenarios. These findings highlight the potential of PhysioME as a reliable tool for supporting clinical decision-making in real-world settings with imperfect data availability.
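The masked-prediction half of such self-supervised pretraining starts from a corruption step: hide a fraction of the signal tokens and train the model to reconstruct the originals at the hidden positions. A minimal sketch of that step (the function name, mask value, and ratio are assumptions, not PhysioME's code):

```python
import numpy as np

def mask_tokens(tokens, mask_ratio=0.5, mask_value=0.0, seed=0):
    """Randomly mask a fraction of signal tokens for masked-prediction
    pretraining. Returns the corrupted sequence and the boolean mask;
    the reconstruction loss is computed only at masked positions."""
    rng = np.random.default_rng(seed)
    n = len(tokens)
    n_mask = int(round(n * mask_ratio))
    idx = rng.choice(n, size=n_mask, replace=False)  # positions to hide
    corrupted = np.array(tokens, dtype=float)
    corrupted[idx] = mask_value
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    return corrupted, mask
```

At inference, an entirely missing modality can be treated as fully masked, which is what lets a restoration decoder trained this way fill in absent inputs instead of failing on them.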