Sensing and Understanding the World over Air: A Large Multimodal Model for Mobile Networks

Zhuoran Duan, Yuhao Wei, Guoshun Nan, Zijun Wang, Yan Yan, Lihua Xiong, Yuhan Ran, Ji Zhang, Jian Li, Qimei Cui, Xiaofeng Tao, Tony Q. S. Quek

arXiv.org Artificial Intelligence 

Abstract--Large models (LMs), such as ChatGPT, have made a significant impact across diverse domains and hold great potential to facilitate the evolution of network intelligence. Wireless-native multi-modal large models (WMLMs) can sense and understand the physical world through multi-modal data, serving as a key enabler that integrates communication, sensing, and intelligence, and thus boosting a variety of smart services for billions of users. However, research on WMLMs remains in its infancy, and the construction of domain-specific multi-modal large models for wireless networks is still underexplored. In this paper, we outline the key characteristics of WMLMs, summarize existing methods, and, on this basis, propose a wireless-native multimodal training paradigm. Specifically, we construct a GPT-style WMLM and train it on a real-world large-scale dataset, leveraging wireless signals as an anchor modality for contrastive learning. Our approach demonstrates outstanding performance compared with existing small-scale models and large multi-modal models, validating the feasibility of using wireless signals as a universal modality and highlighting WMLMs' potential to emerge as a new paradigm for future wireless networks.

The advent of large AI models (LMs) such as ChatGPT has propelled network intelligence into a new evolutionary phase. These remarkable enablers are poised to revolutionize future wireless networks through their advanced performance and generalization capability.
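The abstract's core idea of using wireless signals as an anchor modality for contrastive learning can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's actual training code: it implements a CLIP-style symmetric InfoNCE objective in NumPy, where matching rows of the wireless (anchor) and secondary-modality embedding batches are treated as positive pairs and all other rows in the batch as negatives. The function name, batch layout, and temperature value are hypothetical choices for exposition.

```python
import numpy as np

def anchor_contrastive_loss(wireless_emb, other_emb, temperature=0.07):
    """Symmetric InfoNCE loss between a wireless (anchor) embedding batch
    and a second-modality batch. Row i of each batch is a positive pair;
    every other row in the batch serves as a negative. Shapes: (N, D)."""
    # L2-normalize so the dot product is cosine similarity.
    a = wireless_emb / np.linalg.norm(wireless_emb, axis=1, keepdims=True)
    b = other_emb / np.linalg.norm(other_emb, axis=1, keepdims=True)
    logits = a @ b.T / temperature          # (N, N) similarity matrix
    n = logits.shape[0]

    def cross_entropy_diag(l):
        # Cross-entropy with the diagonal as the target class per row.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average the anchor-to-modality and modality-to-anchor directions.
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

In this sketch the wireless embedding acts as the shared coordinate system: aligning each additional modality (vision, text, etc.) against the same anchor batch yields a joint space without requiring paired data between every modality pair, which is the usual motivation for anchor-based multimodal contrastive training.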