Cai, Yunlong
Online Conformal Probabilistic Numerics via Adaptive Edge-Cloud Offloading
Hou, Qiushuo, Park, Sangwoo, Zecchin, Matteo, Cai, Yunlong, Yu, Guanding, Simeone, Osvaldo
Consider an edge computing setting in which a user submits queries for the solution of a linear system to an edge processor, which is subject to time-varying computing availability. The edge processor applies a probabilistic linear solver (PLS) so that it can respond to the user's query within the allotted time and computing budget. Feedback to the user is in the form of an uncertainty set. Due to model misspecification, the uncertainty set obtained via a direct application of PLS does not come with coverage guarantees with respect to the true solution of the linear system. This work introduces a new method to calibrate the uncertainty sets produced by PLS with the aim of guaranteeing long-term coverage requirements. The proposed method, referred to as online conformal prediction-PLS (OCP-PLS), assumes sporadic feedback from cloud to edge. This enables the online calibration of uncertainty thresholds via online conformal prediction (OCP), an online optimization method previously studied in the context of prediction models. The validity of OCP-PLS is verified via experiments that bring insights into trade-offs between coverage, prediction set size, and cloud usage.
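To make the calibration mechanism concrete, here is a minimal Python sketch of the standard OCP threshold update, with sporadic cloud feedback emulated by a coin flip. The function name, the step size, the feedback rate, and the residual model are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ocp_update(lam, gamma, alpha, covered):
    """One online conformal prediction step: grow the threshold after a
    miss, shrink it after a cover, so long-run miscoverage tracks alpha."""
    err = 0.0 if covered else 1.0
    return lam + gamma * (err - alpha)

# Toy run: illustrative residuals of a miscalibrated solver, sporadic feedback.
rng = np.random.default_rng(0)
lam, gamma, alpha = 1.0, 0.05, 0.1  # initial threshold, step size, target miscoverage
misses = 0
for t in range(1000):
    residual = 2.0 * abs(rng.standard_normal())  # stand-in for the solver's error
    covered = residual <= lam                    # does the set cover the true solution?
    misses += not covered
    if rng.random() < 0.2:                       # sporadic cloud-to-edge feedback
        lam = ocp_update(lam, gamma, alpha, covered)
print(f"final threshold {lam:.2f}, empirical miscoverage {misses / 1000:.3f}")
```

The update enlarges the uncertainty set after every observed miss and shrinks it slightly after every cover, which is the mechanism behind long-term coverage control.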
What If We Had Used a Different App? Reliable Counterfactual KPI Analysis in Wireless Systems
Hou, Qiushuo, Park, Sangwoo, Zecchin, Matteo, Cai, Yunlong, Yu, Guanding, Simeone, Osvaldo
In modern wireless network architectures, such as Open Radio Access Network (O-RAN), the operation of the radio access network (RAN) is managed by applications, or apps for short, deployed at intelligent controllers. These apps are selected from a given catalog based on current contextual information. For instance, a scheduling app may be selected on the basis of current traffic and network conditions. Once an app is chosen and run, it is no longer possible to directly test the key performance indicators (KPIs) that would have been obtained with another app. In other words, we can never simultaneously observe both the actual KPI, obtained by the selected app, and the counterfactual KPI, which would have been attained with another app, for the same network condition, making individual-level counterfactual KPI analysis particularly challenging. This what-if analysis, however, would be valuable for monitoring and optimizing the network operation, e.g., to identify suboptimal app selection strategies. This paper addresses the problem of estimating the values of KPIs that would have been obtained if a different app had been implemented by the RAN. To this end, we propose a conformal-prediction-based counterfactual analysis method for wireless systems that provides reliable error bars for the estimated KPIs, despite the inherent covariate shift between logged and test data. Experimental results for medium access control-layer apps and for physical-layer apps demonstrate the merits of the proposed method.
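The abstract promises reliable error bars under covariate shift between logged and test data; a standard tool for this setting is weighted conformal prediction, sketched below. This is a generic construction that assumes known likelihood ratios, not necessarily the exact procedure used in the paper.

```python
import numpy as np

def weighted_conformal_quantile(scores, weights, test_weight, alpha):
    """Weighted (1 - alpha)-quantile of calibration scores, where weights are
    likelihood ratios dP_test/dP_logged evaluated at the calibration points."""
    s = np.append(scores, np.inf)   # +inf handles the conservative corner case
    w = np.append(weights, test_weight)
    order = np.argsort(s)
    p = np.cumsum(w[order]) / w.sum()
    return s[order][np.searchsorted(p, 1 - alpha)]

# Toy usage: with all weights equal to 1, this recovers unweighted conformal.
rng = np.random.default_rng(3)
scores = np.abs(rng.standard_normal(100))  # nonconformity scores on logged data
qhat = weighted_conformal_quantile(scores, np.ones(100), 1.0, alpha=0.1)
```

The returned quantile then defines the error bar around the counterfactual KPI estimate, with the weights compensating for the mismatch between logged and test distributions.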
Automatic AI Model Selection for Wireless Systems: Online Learning via Digital Twinning
Hou, Qiushuo, Zecchin, Matteo, Park, Sangwoo, Cai, Yunlong, Yu, Guanding, Chowdhury, Kaushik, Simeone, Osvaldo
In modern wireless network architectures, such as O-RAN, artificial intelligence (AI)-based applications are deployed at intelligent controllers to carry out functionalities like scheduling or power control. The AI "apps" are selected on the basis of contextual information such as network conditions, topology, traffic statistics, and design goals. The mapping between context and AI model parameters is ideally done in a zero-shot fashion via an automatic model selection (AMS) mapping that leverages only contextual information without requiring any current data. This paper introduces a general methodology for the online optimization of AMS mappings. Optimizing an AMS mapping is challenging, as it requires exposure to data collected from many different contexts. Therefore, if carried out online, this initial optimization phase would be extremely time-consuming. A possible solution is to leverage a digital twin of the physical system to generate synthetic data from multiple simulated contexts. However, given that the simulator at the digital twin is imperfect, a direct use of simulated data for the optimization of the AMS mapping would yield poor performance when tested in the real system. This paper proposes a novel method for the online optimization of the AMS mapping that corrects for the bias of the simulator by means of limited real data collected from the physical system. Experimental results for a graph neural network-based power control app demonstrate the significant advantages of the proposed approach.
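As a toy illustration of the two-stage idea, that is, fit on plentiful simulated data and then correct the simulator bias with scarce real data, here is a linear-regression sketch. The paper's app is a graph neural network and its correction method is more sophisticated; everything below (ridge fit, additive bias, sample sizes) is an illustrative assumption.

```python
import numpy as np

def ridge_fit(X, Y, reg=1e-3):
    """Ridge-regression fit of a context -> model-parameter mapping."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ Y)

rng = np.random.default_rng(1)
W_true = rng.standard_normal((4, 2))
# Many simulated contexts from the digital twin (cheap, but biased) ...
X_sim = rng.standard_normal((500, 4))
Y_sim = X_sim @ W_true + 0.5             # constant offset stands in for simulator bias
# ... versus a handful of real measurements from the physical system.
X_real = rng.standard_normal((20, 4))
Y_real = X_real @ W_true
W_sim = ridge_fit(X_sim, Y_sim)          # stage 1: learn from simulation
W_corr = ridge_fit(X_real, Y_real - X_real @ W_sim)  # stage 2: fit the residual bias
W_ams = W_sim + W_corr                   # bias-corrected AMS mapping
```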
Rate-Adaptive Coding Mechanism for Semantic Communications With Multi-Modal Data
He, Yangshuo, Yu, Guanding, Cai, Yunlong
Recently, the ever-increasing demand for bandwidth in multi-modal communication systems has called for a paradigm shift. Powered by deep learning, semantic communications are applied to multi-modal scenarios to boost communication efficiency and save communication resources. However, existing end-to-end neural network (NN)-based frameworks without a channel encoder/decoder are incompatible with modern digital communication systems. Moreover, most end-to-end designs are task-specific and require re-design and re-training for new tasks, which limits their applications. In this paper, we propose a distributed multi-modal semantic communication framework incorporating the conventional channel encoder/decoder. We adopt an NN-based semantic encoder and decoder to extract correlated semantic information contained in different modalities, including speech, text, and image. Based on the proposed framework, we further establish a general rate-adaptive coding mechanism for various types of multi-modal semantic tasks. In particular, we utilize unequal error protection based on semantic importance, which is derived by evaluating the distortion bound of each modality. We further formulate and solve an optimization problem that aims at minimizing inference delay while maintaining inference accuracy for semantic tasks. Numerical results show that the proposed mechanism fares better than both conventional communication and existing semantic communication systems in terms of task performance, inference delay, and deployment complexity.
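As a rough sketch of rate adaptation driven by semantic importance, the greedy heuristic below assigns the strongest (lowest-rate) channel code that fits a bit budget to the most important modality first. The paper instead solves a delay-minimization problem; the function, rates, and importance scores here are illustrative assumptions only.

```python
def allocate_rates(importance, rates, budget_bits, payload_bits):
    """Greedy unequal error protection: the most important modality gets the
    strongest (lowest-rate) channel code that still fits the bit budget."""
    order = sorted(range(len(importance)), key=lambda i: -importance[i])
    choice, remaining = {}, budget_bits
    for i in order:
        for r in sorted(rates):          # low code rate = strong protection
            cost = payload_bits[i] / r   # coded bits needed at rate r
            if cost <= remaining:
                choice[i] = r
                remaining -= cost
                break
    return choice                        # modalities that fit no rate are omitted

# Three modalities, e.g. speech, text, image; importance scores are illustrative.
print(allocate_rates([0.6, 0.3, 0.1], [1/3, 1/2, 3/4], 6000, [800, 400, 1200]))
```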
Deep-Unfolding for Next-Generation Transceivers
Hu, Qiyu, Cai, Yunlong, Zhang, Guangyi, Yu, Guanding, Li, Geoffrey Ye
The stringent performance requirements of future wireless networks, such as ultra-high data rates, extremely high reliability, and low latency, are spurring worldwide studies on defining the next-generation multiple-input multiple-output (MIMO) transceivers. For the design of advanced transceivers in wireless communications, optimization approaches, which often lead to iterative algorithms, have achieved great success for MIMO transceivers. However, these algorithms generally require a large number of iterations to converge, which entails considerable computational complexity and often requires fine-tuning of various parameters. With the development of deep learning, approximating the iterative algorithms with deep neural networks (DNNs) can significantly reduce the computational time. However, DNNs typically lead to black-box solvers, which require large amounts of data and extensive training time. To further overcome these challenges, deep-unfolding has emerged, which incorporates the benefits of both deep learning and iterative algorithms by unfolding the iterative algorithm into a layer-wise structure analogous to DNNs. In this article, we first review the framework of deep-unfolding for transceiver design with matrix parameters and its recent advancements. Then, some endeavors in applying deep-unfolding approaches to next-generation advanced transceiver design are presented. Moreover, some open issues for future research are highlighted.
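The core idea of deep-unfolding can be shown in a few lines: each iteration of a classical algorithm becomes one network layer whose internal constants become trainable parameters. Below is a minimal Python sketch for gradient descent on a least-squares problem; the article itself targets far richer MIMO transceiver designs with matrix parameters.

```python
import numpy as np

def unfolded_ls(A, y, step_sizes):
    """Forward pass of gradient descent for min ||Ax - y||^2, unfolded into
    len(step_sizes) layers; each layer's step size is a trainable parameter."""
    x = np.zeros(A.shape[1])
    for mu in step_sizes:                 # one algorithm iteration = one layer
        x = x - mu * (A.T @ (A @ x - y))  # classical update with a learned mu
    return x
```

Training would then backpropagate a task loss through these layers to learn the per-layer step sizes, instead of hand-tuning them as in the original iterative algorithm.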
DDPG-Driven Deep-Unfolding with Adaptive Depth for Channel Estimation with Sparse Bayesian Learning
Hu, Qiyu, Shi, Shuhan, Cai, Yunlong, Yu, Guanding
Deep-unfolding neural networks (NNs) have received great attention since they achieve satisfactory performance with relatively low complexity. Typically, these deep-unfolding NNs are restricted to a fixed depth for all inputs. However, the optimal number of layers required for convergence changes with different inputs. In this paper, we first develop a framework of deep deterministic policy gradient (DDPG)-driven deep-unfolding with adaptive depth for different inputs, where the trainable parameters of the deep-unfolding NN are learned by DDPG, rather than updated directly by the stochastic gradient descent algorithm. Specifically, the optimization variables, trainable parameters, and architecture of the deep-unfolding NN are designed as the state, action, and state transition of DDPG, respectively. Then, this framework is employed to deal with the channel estimation problem in massive multiple-input multiple-output systems. First, we formulate the channel estimation problem with an off-grid basis and develop a sparse Bayesian learning (SBL)-based algorithm to solve it. Second, the SBL-based algorithm is unfolded into a layer-wise structure with a set of introduced trainable parameters. Third, the proposed DDPG-driven deep-unfolding framework is employed to solve this channel estimation problem based on the unfolded structure of the SBL-based algorithm. To realize adaptive depth, we design a halting score, a function of the channel reconstruction error, to indicate when to stop. Furthermore, the proposed framework is extended to realize adaptive depth for general deep neural networks (DNNs). Simulation results show that the proposed algorithm outperforms conventional optimization algorithms and fixed-depth DNNs while requiring a much smaller number of layers.
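The adaptive-depth control flow can be sketched as follows: run unfolded layers until a halting score falls below a threshold. In the paper the halting score is a learned function of the channel reconstruction error and the trainable parameters are produced by DDPG; in the sketch below the update norm stands in as a simple proxy, so this is only an assumption-laden illustration of the stopping logic.

```python
import numpy as np

def adaptive_depth_solve(A, y, step_sizes, tol=1e-3):
    """Run unfolded layers until a halting score drops below tol; the update
    norm is used here as a simple proxy for the channel reconstruction error."""
    x, depth = np.zeros(A.shape[1]), 0
    for mu in step_sizes:
        x_new = x - mu * (A.T @ (A @ x - y))
        halting_score = np.linalg.norm(x_new - x)
        x, depth = x_new, depth + 1
        if halting_score < tol:           # stop early for easy inputs
            break
    return x, depth                       # the realized depth varies with the input
```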
Robust Semantic Communications with Masked VQ-VAE Enabled Codebook
Hu, Qiyu, Zhang, Guangyi, Qin, Zhijin, Cai, Yunlong, Yu, Guanding, Li, Geoffrey Ye
Although semantic communications have exhibited satisfactory performance for a large number of tasks, the impact of semantic noise and the robustness of the systems have not been well investigated. Semantic noise refers to the mismatch between the intended semantic symbols and the received ones, which can cause task failures. In this paper, we first propose a framework for robust end-to-end semantic communication systems to combat the semantic noise. In particular, we analyze sample-dependent and sample-independent semantic noise. To combat the semantic noise, adversarial training with weight perturbation is developed to incorporate samples with semantic noise into the training dataset. Then, we propose to mask a portion of the input, where the semantic noise appears frequently, and design the masked vector quantized-variational autoencoder (VQ-VAE) with a noise-related masking strategy. We use a discrete codebook shared by the transmitter and the receiver for encoded feature representation. To further improve the system robustness, we develop a feature importance module (FIM) to suppress the noise-related and task-unrelated features. Thus, the transmitter simply needs to transmit the indices of these important task-related features in the codebook. Simulation results show that the proposed method can be applied to many downstream tasks and significantly improves the robustness against semantic noise, with a remarkable reduction in transmission overhead.
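The transmission-overhead reduction comes from sharing a discrete codebook between transmitter and receiver, so that only integer codeword indices cross the channel. The sketch below shows this nearest-codeword quantization step; the masking strategy and the feature importance module are omitted, and all sizes are illustrative assumptions.

```python
import numpy as np

def quantize_to_codebook(features, codebook):
    """Map each feature vector to its nearest codeword; only the integer
    indices need to be transmitted when both ends share the codebook."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)              # one codebook index per feature

rng = np.random.default_rng(2)
codebook = rng.standard_normal((64, 8))   # shared by transmitter and receiver
feats = rng.standard_normal((10, 8))      # semantic encoder output (illustrative)
idx = quantize_to_codebook(feats, codebook)
reconstructed = codebook[idx]             # receiver-side lookup from indices
```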