A Multi-Task Foundation Model for Wireless Channel Representation Using Contrastive and Masked Autoencoder Learning
Guler, Berkay, Geraci, Giovanni, Jafarkhani, Hamid
arXiv.org Artificial Intelligence
This work has been submitted to the IEEE for possible publication.

Abstract--Current applications of self-supervised learning to wireless channel representation often borrow paradigms developed for text and image processing, without fully addressing the unique characteristics and constraints of wireless communications. To bridge this gap, we introduce ContraWiMAE, a Wireless Contrastive Masked Autoencoder: a transformer-based foundation model that unifies masked reconstruction and masked contrastive learning for wireless channel representation. Our key innovation is a new wireless-inspired contrastive objective that exploits the inherent characteristics of the wireless environment, including noise, fading, and partial observability, as natural augmentation. Through extensive evaluation on unseen scenarios and conditions, we demonstrate our method's effectiveness in multiple downstream tasks, including cross-frequency beam selection, line-of-sight detection, and channel estimation. ContraWiMAE exhibits superior linear separability and adaptability in diverse wireless environments, demonstrating exceptional data efficiency and competitive performance compared with supervised baselines under challenging conditions. Comparative evaluations against a state-of-the-art wireless channel foundation model confirm the superior performance and data efficiency of our approach, highlighting its potential as a powerful baseline for future research in self-supervised wireless channel representation learning. To foster further work in this direction, we release the model weights and training pipeline for ContraWiMAE.

Large-scale self-supervised pretraining has transformed the fields of natural language processing and computer vision. This paradigm leverages diverse datasets and proxy objectives to learn broadly transferable representations, in contrast to traditional task-specific training approaches [2]-[4]. By decoupling feature learning from downstream tasks, it enables efficient, task-specific adaptation. Models following this two-stage strategy--computationally intensive pretraining followed by lightweight adaptation--are commonly referred to as foundation models [5].
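The abstract describes unifying a masked-reconstruction objective with a contrastive one. As a rough illustration of how such a combined pretraining loss can be structured, here is a minimal NumPy sketch: the function names, the InfoNCE form of the contrastive term, and the weighting factor `alpha` are assumptions for exposition, not the authors' actual implementation.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Illustrative InfoNCE contrastive loss between two views' embeddings.

    Rows of z1 and z2 are per-sample embeddings; row i of z1 and row i of
    z2 are treated as the positive pair (e.g., two noisy/faded observations
    of the same channel, per the paper's natural-augmentation idea).
    """
    # L2-normalize so similarities are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # pairwise similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx = np.arange(len(z1))
    # cross-entropy with positives on the diagonal
    return -np.mean(np.log(probs[idx, idx]))

def combined_loss(recon, target, mask, z1, z2, alpha=0.5):
    """Masked MSE reconstruction term plus a weighted contrastive term.

    `mask` selects the masked (hidden) entries that the decoder must
    reconstruct; `alpha` (assumed) balances the two objectives.
    """
    mse = np.sum(((recon - target) * mask) ** 2) / np.sum(mask)
    return mse + alpha * info_nce(z1, z2)
```

In this sketch the reconstruction term only scores the masked entries, mirroring masked-autoencoder training, while the contrastive term pulls embeddings of two views of the same channel together and pushes different channels apart.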
Oct-23-2025