Multimodal signal fusion for stress detection using deep neural networks: a novel approach for converting 1D signals to unified 2D images
Hasanpoor, Yasin, Tarvirdizadeh, Bahram, Alipour, Khalil, Ghamari, Mohammad
arXiv.org Artificial Intelligence
This study introduces a novel method that transforms multimodal physiological signals -- photoplethysmography (PPG), galvanic skin response (GSR), and acceleration (ACC) -- into 2D image matrices to enhance stress detection using convolutional neural networks (CNNs). Unlike traditional approaches that process these signals separately or rely on fixed encodings, our technique fuses them into structured image representations that enable CNNs to capture temporal and cross-signal dependencies more effectively. This image-based transformation not only improves interpretability but also serves as a robust form of data augmentation. To further enhance generalization and model robustness, we systematically reorganize the fused signals into multiple formats, combining them in a multi-stage training pipeline. This approach significantly boosts classification performance, with test accuracy improving from 92.57% (using individual signal orderings) to 95.86% when using the combined strategy. While demonstrated here in the context of stress detection, the proposed method is broadly applicable to any domain involving multimodal physiological signals, paving the way for more accurate, personalized, and real-time health monitoring through wearable technologies.
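The abstract does not specify the exact fusion layout, so the following is a minimal sketch of the general idea under stated assumptions: each 1D signal is resampled to a common length, stacked row-wise into a square 2D matrix, and the row ordering of the three modalities is permuted to generate augmented variants. The function names (`signals_to_image`, `ordering_variants`) and the row-tiling scheme are illustrative, not the authors' implementation.

```python
import numpy as np
from itertools import permutations

def signals_to_image(ppg, gsr, acc, size=64):
    """Resample each 1D signal to `size` samples and stack them into a 2D image.

    Assumed layout: one row per modality, tiled vertically to fill a square
    matrix, then min-max normalised to [0, 1] as pixel intensities.
    """
    def resample(x, n):
        # Linear interpolation onto a common length n.
        x = np.asarray(x, dtype=float)
        return np.interp(np.linspace(0, len(x) - 1, n), np.arange(len(x)), x)

    rows = np.stack([resample(s, size) for s in (ppg, gsr, acc)])  # (3, size)
    img = np.repeat(rows, size // 3 + 1, axis=0)[:size]            # (size, size)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def ordering_variants(ppg, gsr, acc, size=64):
    """Augmentation: one fused image per permutation of the modality order."""
    return [signals_to_image(*perm, size=size)
            for perm in permutations((ppg, gsr, acc))]

# Toy window of synthetic signals standing in for real sensor data.
t = np.linspace(0, 1, 500)
ppg = np.sin(2 * np.pi * 8 * t)
gsr = np.cumsum(np.random.default_rng(0).normal(size=500))
acc = np.cos(2 * np.pi * 3 * t)
variants = ordering_variants(ppg, gsr, acc)
print(len(variants), variants[0].shape)  # 6 (64, 64)
```

Each of the six ordering variants could then be fed to a CNN, either pooled in one training set or in the staged regime the paper describes; which combination yields the reported 95.86% is not recoverable from the abstract alone.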
Sep-18-2025
- Country:
- Asia > Middle East
- Iran > Tehran Province > Tehran (0.04)
- Europe > Switzerland (0.04)
- North America > United States
- California > San Luis Obispo County > San Luis Obispo (0.04)
- Genre:
- Research Report > Promising Solution (0.70)