Provable Performance Bounds for Digital Twin-driven Deep Reinforcement Learning in Wireless Networks: A Novel Digital-Twin Bisimulation Metric
Tao, Zhenyu, Xu, Wei, You, Xiaohu
Digital twin (DT)-driven deep reinforcement learning (DRL) has emerged as a promising paradigm for wireless network optimization, offering a safe and efficient training environment for policy exploration. However, existing methods cannot theoretically guarantee the real-world performance of DT-trained policies before actual deployment, due to the absence of a universal metric for assessing a DT's ability to support reliable DRL training that transfers to physical networks. In this paper, we propose the DT bisimulation metric (DT-BSM), a novel metric based on the Wasserstein distance, to quantify the discrepancy between the Markov decision processes (MDPs) in the DT and in the corresponding real-world wireless network environment. We prove that for any DT-trained policy, the sub-optimality of its performance (regret) in real-world deployment is bounded by a weighted sum of the DT-BSM and its sub-optimality within the MDP in the DT. Then, a modified DT-BSM based on the total variation distance is introduced to avoid the prohibitive computational complexity of the Wasserstein distance in large-scale wireless network scenarios. Further, to tackle the challenge of obtaining accurate transition probabilities of the real-world MDP for the DT-BSM calculation, we propose an empirical DT-BSM method based on statistical sampling. We prove that the empirical DT-BSM always converges to the desired theoretical one, and we quantitatively establish the relationship between the required sample size and the target level of approximation accuracy.

Index Terms--Digital twin, Markov decision process (MDP), deep reinforcement learning (DRL), transfer learning, bisimulation metric.

The long-term evolution of cellular networks, marked by growing scale, density, and heterogeneity, substantially increases the difficulty of wireless network optimization [1]. Deep reinforcement learning (DRL) has emerged as a promising solution for tackling extensive state and action spaces and nonconvex optimization problems. It has been successfully applied to various network optimization tasks, such as admission control [2], resource allocation [3], node selection [4], and task offloading [5] in wireless networks. To overcome the high cost and risk of trial-and-error DRL training in physical networks, the concept of digital twin (DT) has been introduced [7].
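To make the quantities above concrete, the following is a minimal numerical sketch of a DT-vs-real MDP discrepancy built from reward gaps and Wasserstein (or total variation) distances between transition distributions, for finite state and action spaces. The weighting constant c, the unit-spacing ground metric over states, and the max aggregation over state-action pairs are illustrative assumptions, not the paper's exact DT-BSM definition.

```python
# Sketch of a DT-vs-real MDP discrepancy based on Wasserstein / total
# variation distances between transition distributions. Finite state and
# action spaces are assumed; c, the ground metric, and the max aggregation
# are illustrative choices rather than the paper's DT-BSM definition.
import numpy as np

def wasserstein_1d(p, q):
    """W1 between two distributions over states placed on a line with
    unit spacing (closed form via cumulative distributions)."""
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

def total_variation(p, q):
    """TV distance: a cheaper surrogate for large state spaces."""
    return 0.5 * np.abs(p - q).sum()

def mdp_discrepancy(P_dt, r_dt, P_real, r_real, c=0.9, dist=wasserstein_1d):
    """Per-(s, a) gap |r_dt - r_real| + c * dist(P_dt, P_real),
    aggregated by the maximum over all state-action pairs.

    P_* has shape (S, A, S): transition probabilities.
    r_* has shape (S, A): expected rewards.
    """
    S, A, _ = P_dt.shape
    worst = 0.0
    for s in range(S):
        for a in range(A):
            gap = abs(r_dt[s, a] - r_real[s, a]) + c * dist(P_dt[s, a], P_real[s, a])
            worst = max(worst, gap)
    return worst

# Toy example: a 3-state, 2-action MDP and a slightly perturbed "twin".
rng = np.random.default_rng(0)
P_real = rng.dirichlet(np.ones(3), size=(3, 2))
r_real = rng.uniform(0, 1, size=(3, 2))
P_dt = 0.9 * P_real + 0.1 * rng.dirichlet(np.ones(3), size=(3, 2))
r_dt = r_real + rng.normal(0, 0.05, size=(3, 2))

print("Wasserstein-based gap:", mdp_discrepancy(P_dt, r_dt, P_real, r_real))
print("TV-based gap:         ", mdp_discrepancy(P_dt, r_dt, P_real, r_real,
                                                dist=total_variation))
```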
Overview of AI and Communication for 6G Network: Fundamentals, Challenges, and Future Research Opportunities
Cui, Qimei, You, Xiaohu, Ni, Wei, Nan, Guoshun, Zhang, Xuefei, Zhang, Jianhua, Lyu, Xinchen, Ai, Ming, Tao, Xiaofeng, Feng, Zhiyong, Zhang, Ping, Wu, Qingqing, Tao, Meixia, Huang, Yongming, Huang, Chongwen, Liu, Guangyi, Peng, Chenghui, Pan, Zhiwen, Sun, Tao, Niyato, Dusit, Chen, Tao, Khan, Muhammad Khurram, Jamalipour, Abbas, Guizani, Mohsen, Yuen, Chau
With the growing demand for seamless connectivity and intelligent communication, the integration of artificial intelligence (AI) and sixth-generation (6G) communication networks has emerged as a transformative paradigm. By embedding AI capabilities across various network layers, this integration enables optimized resource allocation, improved efficiency, and enhanced system robustness, particularly in intricate and dynamic environments. This paper presents a comprehensive overview of AI and communication for 6G networks, emphasizing their foundational principles, inherent challenges, and future research opportunities. We first review the integration of AI and communications in the context of 6G, exploring the driving factors behind incorporating AI into wireless communications, as well as the vision for the convergence of AI and 6G. The discourse then transitions to a detailed exposition of the envisioned integration of AI within 6G networks, delineated across three progressive developmental stages. The first stage, AI for Network, focuses on employing AI to augment network performance, optimize efficiency, and enhance user service experiences. The second stage, Network for AI, highlights the role of the network in facilitating and buttressing AI operations and presents key enabling technologies, such as digital twins for AI and semantic communication. In the final stage, AI as a Service, it is anticipated that future 6G networks will innately provide AI functions as services, supporting application scenarios like immersive communication and intelligent industrial robots. In addition, we conduct an in-depth analysis of the critical challenges faced by the integration of AI and communications in 6G. Finally, we outline promising future research opportunities that are expected to drive the development and refinement of AI and 6G communications.
Distributed satellite information networks: Architecture, enabling technologies, and trends
Zhang, Qinyu, Xu, Liang, Huang, Jianhao, Yang, Tao, Jiao, Jian, Wang, Ye, Shi, Yao, Zhang, Chiya, Zhang, Xingjian, Zhang, Ke, Gong, Yupeng, Deng, Na, Zhao, Nan, Gao, Zhen, Han, Shujun, Xu, Xiaodong, You, Li, Wang, Dongming, Jiang, Shan, Zhao, Dixian, Zhang, Nan, Hu, Liujun, He, Xiongwen, Li, Yonghui, Gao, Xiqi, You, Xiaohu
Driven by the vision of ubiquitous connectivity and wireless intelligence, the evolution of ultra-dense constellation-based satellite-integrated Internet is underway, now taking preliminary shape. Nevertheless, the entrenched institutional silos and limited, nonrenewable heterogeneous network resources leave current satellite systems struggling to accommodate the escalating demands of next-generation intelligent applications. In this context, distributed satellite information networks (DSIN), exemplified by cohesive clustered satellite systems, have emerged as an innovative architecture, bridging information gaps across diverse satellite systems, such as communication, navigation, and remote sensing, and establishing a unified, open information network paradigm to support resilient space information services. This survey first provides a profound discussion of the innovative network architectures of DSIN, encompassing distributed regenerative satellite network architecture, distributed satellite computing network architecture, and reconfigurable satellite formation flying, to enable flexible and scalable communication, computing, and control. The DSIN faces challenges from network heterogeneity, unpredictable channel dynamics, sparse resources, and decentralized collaboration frameworks. To address these issues, a series of enabling technologies is identified, including channel modeling and estimation, cloud-native distributed MIMO cooperation, grant-free massive access, network routing, and the proper combination of all these diversity techniques. Furthermore, to heighten overall resource efficiency, cross-layer optimization techniques are further developed to meet upper-layer requirements for deterministic, adaptive, and secure information services. In addition, emerging research directions and new opportunities are highlighted on the way to achieving the DSIN vision.
Model-driven deep neural network for enhanced direction finding with commodity 5G gNodeB
Liu, Shengheng, Mao, Zihuan, Li, Xingkang, Pan, Mengguan, Liu, Peng, Huang, Yongming, You, Xiaohu
Pervasive and high-accuracy positioning has become increasingly important as a fundamental enabler for intelligent connected devices in mobile networks. Nevertheless, current wireless networks heavily rely on pure model-driven techniques to achieve positioning functionality, often succumbing to performance deterioration due to hardware impairments in practical scenarios. Here we reformulate the direction finding, or angle-of-arrival (AoA) estimation, problem as an image recovery task of the spatial spectrum and propose a new model-driven deep neural network (MoD-DNN) framework. The proposed MoD-DNN scheme comprises three modules: a multi-task autoencoder-based beamformer, a coarray spectrum generation module, and a model-driven deep learning-based spatial spectrum reconstruction module. Our technique enables automatic calibration of angular-dependent phase errors, thereby enhancing the resilience of direction-finding precision against realistic system non-idealities. We validate the proposed scheme using both numerical simulations and field tests. The results show that the proposed MoD-DNN framework enables effective spectrum calibration and accurate AoA estimation. To the best of our knowledge, this study marks the first successful demonstration of hybrid data-and-model-driven direction finding utilizing a readily available commodity 5G gNodeB.
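The image-recovery formulation operates on a spatial spectrum computed from array snapshots. As a point of reference, here is a short sketch of the conventional (Bartlett) spatial spectrum for a uniform linear array; it is not the MoD-DNN itself, and the array size, snapshot count, and noise level are illustrative assumptions.

```python
# Conventional (Bartlett) spatial spectrum for a uniform linear array: the
# kind of "image" that a spatial-spectrum recovery view of AoA estimation
# operates on. Not the paper's MoD-DNN; parameters are illustrative.
import numpy as np
from scipy.signal import find_peaks

def steering_vector(theta_deg, n_antennas, spacing=0.5):
    """ULA steering vector for half-wavelength element spacing."""
    theta = np.deg2rad(theta_deg)
    n = np.arange(n_antennas)
    return np.exp(2j * np.pi * spacing * n * np.sin(theta))

def bartlett_spectrum(snapshots, angles_deg):
    """P(theta) = a(theta)^H R a(theta) from snapshots of shape
    (n_antennas, n_snapshots), where R is the sample covariance."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    spec = []
    for th in angles_deg:
        a = steering_vector(th, snapshots.shape[0])
        spec.append(np.real(a.conj() @ R @ a) / snapshots.shape[0])
    return np.array(spec)

# Two sources at -20 and 35 degrees observed by an 8-element ULA.
rng = np.random.default_rng(1)
n_ant, n_snap = 8, 200
A = np.stack([steering_vector(-20, n_ant), steering_vector(35, n_ant)], axis=1)
signals = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
noise = 0.1 * (rng.normal(size=(n_ant, n_snap)) + 1j * rng.normal(size=(n_ant, n_snap)))
X = A @ signals + noise

grid = np.arange(-90, 91, 1)
spectrum = bartlett_spectrum(X, grid)
peaks, _ = find_peaks(spectrum)
top = peaks[np.argsort(spectrum[peaks])[-2:]]
print("Estimated AoAs (two strongest peaks):", np.sort(grid[top]))
```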
Large Generative Model-assisted Talking-face Semantic Communication System
Jiang, Feibo, Tu, Siwei, Dong, Li, Pan, Cunhua, Wang, Jiangzhou, You, Xiaohu
The rapid development of generative Artificial Intelligence (AI) continually unveils the potential of Semantic Communication (SemCom). However, current talking-face SemCom systems still encounter challenges such as low bandwidth utilization, semantic ambiguity, and diminished Quality of Experience (QoE). This study introduces a Large Generative Model-assisted Talking-face Semantic Communication (LGM-TSC) System tailored for talking-face video communication. Firstly, we introduce a Generative Semantic Extractor (GSE) at the transmitter based on the FunASR model to convert semantically sparse talking-face videos into texts with high information density. Secondly, we establish a private Knowledge Base (KB) based on the Large Language Model (LLM) for semantic disambiguation and correction, complemented by a joint knowledge base-semantic-channel coding scheme. Finally, at the receiver, we propose a Generative Semantic Reconstructor (GSR) that utilizes BERT-VITS2 and SadTalker models to transform text back into a high-QoE talking-face video matching the user's timbre. Simulation results demonstrate the feasibility and effectiveness of the proposed LGM-TSC system.
Parallel Digital Twin-driven Deep Reinforcement Learning for User Association and Load Balancing in Dynamic Wireless Networks
Tao, Zhenyu, Xu, Wei, You, Xiaohu
Optimization of user association in a densely deployed heterogeneous cellular network is usually challenging and is further complicated by the dynamic nature of user mobility and fluctuations in user counts. While deep reinforcement learning (DRL) emerges as a promising solution, its application in practice is hindered by high trial-and-error costs in the real world and unsatisfactory physical network performance during training. In addition, existing DRL-based user association methods are usually only applicable to scenarios with a fixed number of users due to convergence and compatibility challenges. In this paper, we propose a parallel digital twin (DT)-driven DRL method for user association and load balancing in networks with dynamic user counts, distributions, and mobility patterns. Our method employs a distributed DRL strategy to handle varying user numbers and exploits a refined neural network structure for faster convergence. To address these DRL training-related challenges, we devise a high-fidelity DT construction technique, featuring a zero-shot generative user mobility model, named Map2Traj, based on a diffusion model. Map2Traj estimates user trajectory patterns and spatial distributions solely from street maps. Armed with this DT environment, DRL agents can be trained without the need for interactions with the physical network. To enhance the generalization ability of DRL models for dynamic scenarios, a parallel DT framework is further established to alleviate the strong correlation and non-stationarity in single-environment training and improve training efficiency. Numerical results show that the proposed parallel DT-driven DRL method achieves performance closely comparable to real-environment training, and even outperforms models trained in a single real-world environment, with a nearly 20% gain in cell-edge user performance.
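The parallel DT idea can be illustrated with a short sketch in which experience is collected round-robin from several independent environment replicas into one shared replay buffer, weakening the correlation between consecutive samples. The toy environment and greedy policy below are stand-ins; the paper's replicas use Map2Traj-generated mobility, which is not reproduced here.

```python
# Round-robin experience collection from several parallel environment
# replicas into a single shared replay buffer. The toy network environment
# and greedy policy are stand-ins for demonstration only.
import random
from collections import deque

class ToyNetworkEnv:
    """Stand-in environment: state = load of 3 cells, action = cell index."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.state = [self.rng.random() for _ in range(3)]

    def step(self, action):
        # Reward favors associating with the least-loaded cell.
        reward = -self.state[action]
        self.state = [min(1.0, s + self.rng.uniform(-0.1, 0.1)) for s in self.state]
        return list(self.state), reward

def collect_parallel(envs, policy, buffer, steps_per_env=100):
    """Interleave transitions from all replicas into the shared buffer."""
    states = [list(env.state) for env in envs]
    for _ in range(steps_per_env):
        for i, env in enumerate(envs):
            action = policy(states[i])
            next_state, reward = env.step(action)
            buffer.append((states[i], action, reward, next_state))
            states[i] = next_state

# Greedy placeholder policy: pick the least-loaded cell.
policy = lambda s: min(range(len(s)), key=lambda a: s[a])

replay = deque(maxlen=10_000)
replicas = [ToyNetworkEnv(seed=k) for k in range(4)]  # four parallel DT replicas
collect_parallel(replicas, policy, replay)
print("Collected transitions:", len(replay))
```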
Learning Wireless Data Knowledge Graph for Green Intelligent Communications: Methodology and Experiments
Huang, Yongming, You, Xiaohu, Zhan, Hang, He, Shiwen, Fu, Ningning, Xu, Wei
Intelligent communications have played a pivotal role in shaping the evolution of 6G networks. Native artificial intelligence (AI) within green communication systems must meet stringent real-time requirements. To achieve this, deploying lightweight and resource-efficient AI models is necessary. However, as wireless networks generate a multitude of data fields and indicators during operation, only a fraction of them has a significant impact on the network AI models. Therefore, real-time intelligence of communication systems heavily relies on a small but critical set of data that profoundly influences the performance of network AI models. These challenges underscore the need for innovative architectures and solutions. In this paper, we propose a solution, termed the pervasive multi-level (PML) native AI architecture, which integrates the concept of knowledge graph (KG) into the intelligent operational manipulations of mobile networks, resulting in the establishment of a wireless data KG. Leveraging the wireless data KG, we characterize the massive and complex data collected from wireless communication networks and analyze the relationships among various data fields. The obtained graph of data field relations enables the on-demand generation of minimal and effective datasets, referred to as feature datasets, tailored to specific application requirements. Consequently, this architecture not only enhances AI training, inference, and validation processes but also significantly reduces resource wastage and overhead for communication networks. To implement this architecture, we have developed a specific solution comprising a spatio-temporal heterogeneous graph attention neural network model (STREAM) as well as a feature dataset generation algorithm. Experiments are conducted to validate the effectiveness of the proposed architecture.
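As a rough illustration of attention-based field ranking on a data-field relation graph, the following sketch applies a generic single-head graph-attention layer and ranks fields by their attention weights with respect to a target indicator. This is not the STREAM model or the paper's feature dataset generation algorithm; the toy graph, embeddings, and ranking rule are assumptions.

```python
# Generic single-head graph-attention aggregation over a tiny "data-field
# relation" graph, used here only to rank fields by attention weight with
# respect to a target indicator. Illustrative stand-in, not STREAM.
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def gat_attention(h, adj, W, a):
    """Per-edge score e_ij = LeakyReLU(a^T [W h_i || W h_j]),
    softmax-normalized over each node's neighborhood."""
    z = h @ W                                        # (N, d')
    N = h.shape[0]
    scores = np.full((N, N), -np.inf)
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                scores[i, j] = leaky_relu(a @ np.concatenate([z[i], z[j]]))
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha[~adj.astype(bool)] = 0.0
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha, alpha @ z                          # weights, aggregated features

rng = np.random.default_rng(2)
fields = ["RSRP", "SINR", "PRB_util", "throughput", "CQI"]
h = rng.normal(size=(5, 4))                          # raw field embeddings (toy)
adj = np.ones((5, 5), dtype=int)                     # fully connected toy graph
W = rng.normal(size=(4, 8))
a = rng.normal(size=(16,))

alpha, _ = gat_attention(h, adj, W, a)
target = fields.index("throughput")
ranking = np.argsort(alpha[target])[::-1]            # includes the field itself (self-loop)
print("Fields ranked by attention w.r.t. 'throughput':",
      [fields[i] for i in ranking])
```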
Network-Assisted Full-Duplex Cell-Free mmWave Networks: Hybrid MIMO Processing and Multi-Agent DRL-Based Power Allocation
Fan, Qingrui, Zhang, Yu, Li, Jiamin, Wang, Dongming, Zhang, Hongbiao, You, Xiaohu
This paper investigates the network-assisted full-duplex (NAFD) cell-free millimeter-wave (mmWave) networks, where the distribution of the transmitting access points (T-APs) and receiving access points (R-APs) across distinct geographical locations mitigates cross-link interference, facilitating the attainment of a truly flexible duplex mode. To curtail deployment expenses and power consumption for mmWave band operations, each AP incorporates a hybrid digital-analog structure encompassing precoder/combiner functions. However, this incorporation introduces processing intricacies into channel estimation and precoding/combining design. In this paper, we first present a hybrid multiple-input multiple-output (MIMO) processing framework and derive explicit expressions for both uplink and downlink achievable rates. Then we formulate a power allocation problem to maximize the weighted bidirectional sum rates. To tackle this non-convex problem, we develop a collaborative multi-agent deep reinforcement learning (MADRL) algorithm called multi-agent twin delayed deep deterministic policy gradient (MATD3) for NAFD cell-free mmWave networks. Specifically, given the tightly coupled nature of both uplink and downlink power coefficients in NAFD cell-free mmWave networks, the MATD3 algorithm resolves this coupling through an interactive learning process between agents and the environment. Finally, the simulation results validate the effectiveness of the proposed channel estimation methods within our hybrid MIMO processing paradigm, and demonstrate that our MATD3 algorithm outperforms both multi-agent deep deterministic policy gradient (MADDPG) and conventional power allocation strategies.
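The single-agent building block that MATD3 extends is TD3's clipped double-Q target with target-policy smoothing. The sketch below computes that target for a batch of transitions; the linear actor and critics, the noise scale, and the [0, 1] power range are illustrative stand-ins rather than the paper's networks.

```python
# Clipped double-Q target of TD3 (the per-agent building block of MATD3):
# y = r + gamma * min(Q1', Q2')(s', clip(mu'(s') + clipped noise)).
# The linear actor/critics and parameter values below are toy stand-ins.
import numpy as np

rng = np.random.default_rng(3)

def td3_target(r, s_next, target_actor, target_q1, target_q2,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, a_low=0.0, a_high=1.0):
    """Compute the TD3 critic target for a batch of transitions."""
    noise = np.clip(rng.normal(0.0, noise_std, size=s_next.shape[0]),
                    -noise_clip, noise_clip)
    a_next = np.clip(target_actor(s_next) + noise, a_low, a_high)  # smoothed target action
    q_min = np.minimum(target_q1(s_next, a_next), target_q2(s_next, a_next))
    return r + gamma * q_min

# Toy stand-ins: states are 4-dim channel/interference features, actions are
# scalar power coefficients in [0, 1].
w_actor = rng.normal(size=4)
w_q1, w_q2 = rng.normal(size=5), rng.normal(size=5)
target_actor = lambda s: 1 / (1 + np.exp(-(s @ w_actor)))            # sigmoid to [0, 1]
target_q1 = lambda s, a: np.concatenate([s, a[:, None]], axis=1) @ w_q1
target_q2 = lambda s, a: np.concatenate([s, a[:, None]], axis=1) @ w_q2

batch_s_next = rng.normal(size=(8, 4))
batch_r = rng.uniform(0, 1, size=8)
print("TD3 targets:", td3_target(batch_r, batch_s_next,
                                 target_actor, target_q1, target_q2))
```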
Large Generative Model Assisted 3D Semantic Communication
Jiang, Feibo, Peng, Yubo, Dong, Li, Wang, Kezhi, Yang, Kun, Pan, Cunhua, You, Xiaohu
Semantic Communication (SC) is a novel paradigm for data transmission in 6G. However, several challenges arise when performing SC in 3D scenarios: 1) 3D semantic extraction; 2) Latent semantic redundancy; and 3) Uncertain channel estimation. To address these issues, we propose a Generative AI Model assisted 3D SC (GAM-3DSC) system. Firstly, we introduce a 3D Semantic Extractor (3DSE), which employs generative AI models, including the Segment Anything Model (SAM) and Neural Radiance Field (NeRF), to extract key semantics from a 3D scenario based on user requirements. The extracted 3D semantics are represented as multi-perspective images of the goal-oriented 3D object. Then, we present an Adaptive Semantic Compression Model (ASCM) for encoding these multi-perspective images, in which we use a semantic encoder with two output heads to perform semantic encoding and to mask redundant semantics in the latent semantic space, respectively. Next, we design a conditional Generative adversarial network and Diffusion model aided Channel Estimation (GDCE) scheme to estimate and refine the Channel State Information (CSI) of physical channels. Finally, simulation results demonstrate the advantages of the proposed GAM-3DSC system in effectively transmitting the goal-oriented 3D scenario.
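A toy illustration of the latent-redundancy masking idea: one head produces the latent code, a second head scores each latent dimension, and only the top-k scored dimensions are transmitted. The linear heads and the value of k are assumptions; this is a sketch of the masking concept, not the paper's ASCM.

```python
# Two-head encoding sketch: a code head and a score head, with top-k masking
# of redundant latent dimensions before transmission. Illustrative only.
import numpy as np

rng = np.random.default_rng(4)

def encode_and_mask(x, W_code, W_score, k):
    """Return the sparse latent code actually transmitted and the kept dims."""
    z = np.tanh(x @ W_code)            # semantic-code head
    score = x @ W_score                # importance-score head
    keep = np.argsort(score)[-k:]      # indices of the k most informative dims
    mask = np.zeros_like(z)
    mask[keep] = 1.0
    return z * mask, keep

x = rng.normal(size=32)                # flattened multi-view image feature (toy)
W_code = rng.normal(size=(32, 16))
W_score = rng.normal(size=(32, 16))
z_tx, kept = encode_and_mask(x, W_code, W_score, k=6)
print("Transmitted dims:", np.sort(kept))
print("Nonzero entries :", int((z_tx != 0).sum()))
```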
Digital versus Analog Transmissions for Federated Learning over Wireless Networks
Yao, Jiacheng, Xu, Wei, Yang, Zhaohui, You, Xiaohu, Bennis, Mehdi, Poor, H. Vincent
In this paper, we quantitatively compare two effective communication schemes, namely digital and analog transmissions, for wireless federated learning (FL) over resource-constrained networks, highlighting their essential differences as well as their respective application scenarios. We first examine both digital and analog transmission methods, together with a unified and fair comparison scheme under practical constraints. A universal convergence analysis under various imperfections is established for FL performance evaluation in wireless networks. These analytical results reveal that the fundamental difference between the two paradigms lies in whether communication and computation are jointly designed. The digital schemes decouple the communication design from specific FL tasks, making it difficult to support simultaneous uplink transmission from massive numbers of devices with limited bandwidth. In contrast, analog communication allows over-the-air computation (AirComp), thus achieving efficient spectrum utilization. However, computation-oriented analog transmission reduces power efficiency, and its performance is sensitive to computational errors. Finally, numerical simulations are conducted to verify these theoretical observations.
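The contrast between the two paradigms can be sketched numerically: digital aggregation quantizes each device's update before server-side averaging, while analog AirComp receives the superposition of all updates in one shot, corrupted by channel noise. The quantization resolution, device count, and noise level below are illustrative assumptions.

```python
# Toy comparison of digital (per-device quantization, then averaging) and
# analog over-the-air (noisy superposition) aggregation of FL model updates.
# All parameters are illustrative; no claim about which scheme is superior.
import numpy as np

rng = np.random.default_rng(5)

def digital_aggregate(updates, n_bits=4):
    """Uniform quantization of each local update, then server-side average."""
    lo, hi = updates.min(), updates.max()
    levels = 2 ** n_bits - 1
    q = np.round((updates - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo
    return q.mean(axis=0)

def analog_aircomp_aggregate(updates, noise_std=0.05):
    """All devices transmit simultaneously; the server receives the noisy sum."""
    superposed = updates.sum(axis=0) + rng.normal(0.0, noise_std, size=updates.shape[1])
    return superposed / updates.shape[0]

n_devices, dim = 20, 1000
updates = rng.normal(size=(n_devices, dim))          # local model updates (toy)
ideal = updates.mean(axis=0)

err_digital = np.linalg.norm(digital_aggregate(updates) - ideal)
err_analog = np.linalg.norm(analog_aircomp_aggregate(updates) - ideal)
print(f"Aggregation error  digital: {err_digital:.4f}   analog: {err_analog:.4f}")
```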