Multi-View Contrastive Learning for Robust Domain Adaptation in Medical Time Series Analysis
arXiv.org Artificial Intelligence
Adapting machine learning models to medical time series across different domains remains challenging due to complex temporal dependencies and dynamic distribution shifts. Current approaches often focus on isolated feature representations, limiting their ability to fully capture the intricate temporal dynamics necessary for robust domain adaptation. In this work, we propose a novel framework that leverages multi-view contrastive learning to integrate temporal patterns, derivative-based dynamics, and frequency-domain features. Our method employs independent encoders and a hierarchical fusion mechanism to learn feature-invariant representations that transfer across domains while preserving temporal coherence. Extensive experiments on diverse medical datasets, including electroencephalogram (EEG), electrocardiogram (ECG), and electromyography (EMG) signals, demonstrate that our approach significantly outperforms state-of-the-art methods in transfer learning tasks. By advancing the robustness and generalizability of machine learning models, our framework offers a practical pathway for deploying reliable AI systems in diverse healthcare settings.

Data and Code Availability

This study uses publicly available medical and healthcare datasets: SleepEEG (Kemp et al., 2000) and ECG (Clifford et al., 2017) for pre-training, and Epilepsy (Andrzejak et al., 2001), FD (Lessmeier et al., 2016), Gesture (Liu et al., 2009), and EMG (Goldberger et al., 2000) for fine-tuning. All datasets are accessible via their respective repositories, with detailed documentation included in the supplementary material.
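The abstract describes three complementary views of each signal: raw temporal patterns, derivative-based dynamics, and frequency-domain features. A minimal sketch of how such views might be constructed for a 1-D signal using NumPy follows; the function name and the specific transforms are our assumptions for illustration, since the paper's actual encoders and hierarchical fusion mechanism are not specified in the abstract.

```python
import numpy as np

def multi_view(x):
    """Construct three views of a 1-D signal (illustrative assumption:
    the abstract names the views but not their exact computation)."""
    temporal = x                         # raw temporal pattern
    derivative = np.gradient(x)          # derivative-based dynamics
    frequency = np.abs(np.fft.rfft(x))   # frequency-domain magnitude
    return temporal, derivative, frequency

# Toy usage on a synthetic 4 Hz sine wave, 128 samples over 1 second.
t = np.linspace(0, 1, 128, endpoint=False)
sig = np.sin(2 * np.pi * 4 * t)
views = multi_view(sig)
```

In a full pipeline, each view would feed an independent encoder, and the encoded representations would be aligned with a contrastive objective before fusion; this snippet covers only the view construction.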
Sep-23-2025