
Collaborating Authors

 Zhang, Ying-Jun Angela


FedMABA: Towards Fair Federated Learning through Multi-Armed Bandits Allocation

arXiv.org Artificial Intelligence

The increasing concern for data privacy has driven the rapid development of federated learning (FL), a privacy-preserving collaborative paradigm. However, the statistical heterogeneity among clients in FL results in inconsistent performance of the server model across clients: the server model may favor certain clients while performing poorly for others, heightening the challenge of fairness. In this paper, we reconsider the inconsistency in client performance distribution and introduce the adversarial multi-armed bandit framework to optimize the proposed objective with explicit constraints on performance disparities. Practically, we propose a novel multi-armed bandit-based allocation FL algorithm (FedMABA) to mitigate performance unfairness among diverse clients with different data distributions. Extensive experiments in different non-I.I.D. scenarios demonstrate the exceptional performance of FedMABA in enhancing fairness.
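
As an illustration of how an adversarial-bandit allocation rule can react to uneven client performance, the sketch below uses a generic EXP3-style exponential-weights update in Python, treating each client as an arm and letting clients with higher validation loss accumulate more aggregation weight. The reward mapping, the exploration rate `gamma`, and the full-information update are assumptions made for illustration; this is not the exact FedMABA algorithm.

```python
import numpy as np

def exp3_client_weights(losses_per_round, gamma=0.1):
    """EXP3-style allocation sketch: each client is an arm, and clients with
    higher validation loss receive larger rewards, so persistently under-served
    clients accumulate aggregation weight over rounds. Illustrative only."""
    num_clients = losses_per_round.shape[1]
    w = np.ones(num_clients)
    for losses in losses_per_round:
        # Map losses to [0, 1] rewards: the worst-performing client gets reward 1.
        r = (losses - losses.min()) / (losses.max() - losses.min() + 1e-12)
        # Exponential-weights update (full-information variant: all losses observed).
        w *= np.exp(gamma * r / num_clients)
    # Mix with uniform exploration, as in EXP3, to obtain allocation probabilities.
    return (1 - gamma) * w / w.sum() + gamma / num_clients

# Toy usage: 5 clients, 20 rounds of synthetic per-client validation losses.
rng = np.random.default_rng(0)
losses = rng.uniform(0.2, 1.0, size=(20, 5))
losses[:, 3] += 0.5                      # client 3 is persistently under-served
print(exp3_client_weights(losses))       # client 3 receives the largest weight
```

Feeding weights of this kind into the server-side averaging step is one way to counteract the favoritism described above; the paper's own update additionally enforces explicit constraints on performance disparities.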


Differentially Private Over-the-Air Federated Learning Over MIMO Fading Channels

arXiv.org Artificial Intelligence

Federated learning (FL) enables edge devices to collaboratively train machine learning models, with model communication replacing direct data uploading. While over-the-air model aggregation improves communication efficiency, uploading models to an edge server over wireless networks can pose privacy risks. Differential privacy (DP) is a widely used quantitative technique to measure statistical data privacy in FL. Previous research has focused on over-the-air FL with a single-antenna server, leveraging communication noise to enhance user-level DP. This approach achieves the so-called "free DP" by controlling transmit power rather than introducing additional DP-preserving mechanisms at devices, such as adding artificial noise. In this paper, we study differentially private over-the-air FL over a multiple-input multiple-output (MIMO) fading channel. We show that FL model communication with a multiple-antenna server amplifies privacy leakage when the server employs separate receive combining for model aggregation and information inference. Consequently, relying solely on communication noise, as done in the multiple-input single-output system, cannot meet high privacy requirements, and a device-side privacy-preserving mechanism is necessary for optimal DP design. We analyze the learning convergence and privacy loss of the studied FL system and propose a transceiver design algorithm based on alternating optimization. Numerical results demonstrate that the proposed method achieves a better privacy-learning trade-off compared to prior work. The emergence of artificial intelligence (AI) applications that leverage massive data generated at the edge of wireless networks has attracted widespread interest [2], [3]. Federated learning (FL) is a popular paradigm for exploiting edge devices' data and computation power for distributed machine learning. FL coordinates the distributed training of an AI model on edge devices by periodically sharing model information with an edge server [4].
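
The device-side privacy mechanism discussed above can be pictured with a minimal numerical sketch: each device clips its model update, adds artificial Gaussian noise locally, and the server receives the superposed (over-the-air) sum corrupted by residual post-combining channel noise. The clipping norm, the noise levels, and the idealized combining are assumptions for illustration; they do not reproduce the paper's MIMO transceiver design or its privacy accounting.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 10, 1000                          # devices, model dimension
C = 1.0                                  # assumed clipping norm for each local update
updates = rng.normal(size=(K, d))
updates *= C / np.maximum(np.linalg.norm(updates, axis=1, keepdims=True), C)

sigma_dp = 0.5                           # device-side artificial Gaussian noise (std)
sigma_ch = 0.1                           # residual channel noise after receive combining
# Gaussian mechanism applied locally before transmission.
tx = updates + sigma_dp * rng.normal(size=(K, d))
# Over-the-air aggregation: transmitted signals superpose on the channel; receive
# combining at the server is abstracted here as an ideal sum plus additive noise.
rx = tx.sum(axis=0) + sigma_ch * rng.normal(size=d)
average = rx / K

true_average = updates.mean(axis=0)
print("aggregation MSE:", np.mean((average - true_average) ** 2))
# Total Gaussian noise std shielding any single device's (norm-C) contribution.
print("noise std per coordinate:", np.sqrt(K * sigma_dp**2 + sigma_ch**2))
```

Raising `sigma_dp` tightens privacy but inflates the aggregation error, which is the privacy-learning trade-off the paper's transceiver design optimizes.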


Green Edge AI: A Contemporary Survey

arXiv.org Artificial Intelligence

Artificial intelligence (AI) technologies have emerged as pivotal enablers across a multitude of industries, including consumer electronics, healthcare, and manufacturing, largely due to their resurgence over the past decade. The transformative power of AI is primarily derived from the utilization of deep neural networks (DNNs), which require extensive data for training and substantial computational resources for processing. Consequently, DNN models are typically trained and deployed on resource-rich cloud servers. However, due to potential latency issues associated with cloud communications, deep learning (DL) workflows are increasingly being transitioned to wireless edge networks near end-user devices (EUDs). This shift is designed to support latency-sensitive applications and has given rise to a new paradigm of edge AI, which will play a critical role in upcoming 6G networks to support ubiquitous AI applications. Despite its potential, edge AI faces substantial challenges, mostly due to the dichotomy between the resource limitations of wireless edge networks and the resource-intensive nature of DL. Specifically, the acquisition of large-scale data, as well as the training and inference processes of DNNs, can rapidly deplete the battery energy of EUDs. This necessitates an energy-conscious approach to edge AI to ensure both optimal and sustainable performance. In this paper, we present a contemporary survey on green edge AI. We commence by analyzing the principal energy consumption components of edge AI systems to identify the fundamental design principles of green edge AI. Guided by these principles, we then explore energy-efficient design methodologies for the three critical tasks in edge AI systems: training data acquisition, edge training, and edge inference. Finally, we underscore potential future research directions to further enhance the energy efficiency of edge AI.


CFLIT: Coexisting Federated Learning and Information Transfer

arXiv.org Artificial Intelligence

Future wireless networks are expected to support diverse mobile services, including artificial intelligence (AI) services and ubiquitous data transmissions. Federated learning (FL), as a revolutionary learning approach, enables collaborative AI model training across distributed mobile edge devices. By exploiting the superposition property of multiple-access channels, over-the-air computation allows concurrent model uploading from massive devices over the same radio resources, and thus significantly reduces the communication cost of FL. In this paper, we study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network. We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system. Under this framework, we aim to maximize the IT data rate and guarantee a given FL convergence performance by optimizing the long-term radio resource allocation. A key challenge that limits the spectrum efficiency of the coexisting system lies in the large overhead incurred by frequent communication between the server and edge devices for FL model aggregation. To address the challenge, we rigorously analyze the impact of the computation-to-communication ratio on the convergence of over-the-air FL in wireless fading channels. The analysis reveals the existence of an optimal computation-to-communication ratio that minimizes the amount of radio resources needed for over-the-air FL to converge to a given error tolerance. Based on the analysis, we propose a low-complexity online algorithm to jointly optimize the radio resource allocation for both the FL devices and IT devices. Extensive numerical simulations verify the superior performance of the proposed design for the coexistence of FL and IT devices in wireless cellular systems.
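
To see why an optimal computation-to-communication ratio can exist, the toy calculation below assumes a generic FedAvg-style convergence model, error(T, E) ≈ c1/(E·T) + c2·E, in which extra local steps E per aggregation speed up optimization but add a client-drift floor. The bound shape and the constants are illustrative assumptions, not the convergence analysis derived in the paper.

```python
import numpy as np

def rounds_needed(E, eps=1e-2, c1=50.0, c2=2e-4):
    """Communication rounds to reach error eps under an assumed FedAvg-style bound
    error(T, E) ~ c1/(E*T) + c2*E (optimization term + client-drift floor).
    Returns inf when the drift floor alone already exceeds eps."""
    floor = c2 * E
    return np.inf if floor >= eps else c1 / (E * (eps - floor))

ratios = np.arange(1, 61)                                   # local steps per aggregation
radio_cost = np.array([rounds_needed(E) for E in ratios])   # one aggregation per round
best = ratios[np.argmin(radio_cost)]
print("toy-model optimal computation-to-communication ratio:", best)
print("aggregations (radio-resource blocks) needed at that ratio:", radio_cost[best - 1])
```

Under this assumed model, too few local steps waste radio resources on frequent aggregations, while too many raise the drift floor above the target error, which is the qualitative trade-off behind the optimal ratio identified in the paper.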


Reconfigurable Intelligent Surface Enabled Federated Learning: A Unified Communication-Learning Design Approach

arXiv.org Artificial Intelligence

To exploit massive amounts of data generated at mobile edge networks, federated learning (FL) has been proposed as an attractive substitute for centralized machine learning (ML). By collaboratively training a shared learning model at edge devices, FL avoids direct data transmission and thus overcomes the high communication latency and privacy issues of centralized ML. To improve the communication efficiency of FL model aggregation, over-the-air computation has been introduced to support simultaneous local model uploading from a large number of devices by exploiting the inherent superposition property of wireless channels. However, due to the heterogeneity of communication capacities among edge devices, over-the-air FL suffers from the straggler issue, in which the device with the weakest channel acts as a bottleneck of the model aggregation performance. This issue can be alleviated to some extent by device selection, but device selection itself entails a tradeoff between data exploitation and model communication. In this paper, we leverage reconfigurable intelligent surface (RIS) technology to relieve the straggler issue in over-the-air FL. Specifically, we develop a learning analysis framework to quantitatively characterize the impact of device selection and model aggregation error on the convergence of over-the-air FL. Then, we formulate a unified communication-learning optimization problem to jointly optimize device selection, over-the-air transceiver design, and RIS configuration. Numerical experiments show that the proposed design achieves substantial learning accuracy improvement compared with state-of-the-art approaches, especially when channel conditions vary dramatically across edge devices. X. Yuan is with the Center for Intelligent Networking and Communications, the University of Electronic Science and Technology of China, Chengdu, China (e-mail: xjyuan@uestc.edu.cn). The availability of massive amounts of data at mobile edge devices has led to a surge of interest in developing artificial intelligence (AI) services, such as image recognition [1] and natural language processing [2], at the edge of wireless networks.
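
The straggler effect and the role of the RIS can be illustrated with the standard composite-channel model, in which device k sees an effective channel equal to its direct link plus the RIS-reflected cascaded link. The sketch below assumes random Rayleigh channels, a single-antenna server, and plain random search over RIS phase shifts as a stand-in for the paper's joint optimization of device selection, transceiver, and RIS configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
K, N = 8, 64                                  # devices, RIS elements
h_d = 0.1 * (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)        # weak direct links
h_r = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)    # device-to-RIS
g   = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)              # RIS-to-server

def effective_channels(theta):
    """Composite device-server channels: direct path plus RIS-reflected path."""
    return h_d + (g * np.exp(1j * theta)) @ h_r

def weakest_link(theta):
    """Proxy for over-the-air aggregation quality: the device with the smallest
    effective channel magnitude (the straggler) limits the alignment."""
    return np.abs(effective_channels(theta)).min()

# Random search over RIS phase configurations (a stand-in for the joint design).
best_val = -np.inf
for _ in range(2000):
    theta = rng.uniform(0, 2 * np.pi, size=N)
    best_val = max(best_val, weakest_link(theta))

print("weakest effective channel without RIS :", np.abs(h_d).min())
print("weakest effective channel with RIS    :", best_val)
```

Because the weakest effective channel bounds how well all devices can be aligned at the receiver, strengthening it through the RIS directly relieves the straggler bottleneck described above.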


CSIT-Free Federated Edge Learning via Reconfigurable Intelligent Surface

arXiv.org Machine Learning

We study over-the-air model aggregation in federated edge learning (FEEL) systems, where channel state information at the transmitters (CSIT) is assumed to be unavailable. We leverage the reconfigurable intelligent surface (RIS) technology to align the cascaded channel coefficients for CSIT-free model aggregation. We then develop a difference-of-convex algorithm for the resulting non-convex optimization. Numerical experiments on image classification show that the proposed method is able to achieve a similar learning accuracy as the state-of-the-art CSIT-based solution, demonstrating the efficiency of our approach in combating the lack of CSIT. With the explosive increase in the number of connected devices at mobile edge networks, machine learning (ML) over a vast volume of data at edge devices has attracted considerable research attention.
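
A small worked example helps show why aligning the composite channel coefficients removes the need for CSIT at the transmitters: if all effective channels coincide, the raw superposed signal is already a scaled copy of the desired model sum. The real-valued channels, the perfect alignment, and the server-side scaling below are illustrative assumptions; in the paper the alignment is achieved by configuring the RIS via a difference-of-convex algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
K, d = 6, 100
updates = rng.normal(size=(K, d))             # local model updates
target = updates.sum(axis=0)                  # desired over-the-air sum

def aggregation_nmse(h_eff):
    """Devices transmit raw updates (no CSIT, no per-device pre-equalization);
    the server only scales the superposed signal by the mean effective channel."""
    rx = h_eff @ updates                      # superposition over the channel
    est = rx / h_eff.mean()
    return np.sum((est - target) ** 2) / np.sum(target ** 2)

aligned    = np.ones(K)                       # RIS tuned so composite channels coincide
misaligned = rng.uniform(0.2, 1.8, size=K)    # what the server would see without alignment
print("NMSE with aligned channels   :", aggregation_nmse(aligned))
print("NMSE with misaligned channels:", aggregation_nmse(misaligned))
```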


The Roadmap to 6G -- AI Empowered Wireless Networks

arXiv.org Artificial Intelligence

The recent upsurge of diversified mobile applications, especially those supported by Artificial Intelligence (AI), is spurring heated discussions on the future evolution of wireless communications. While 5G is being deployed around the world, efforts from industry and academia have started to look beyond 5G and conceptualize 6G. We envision 6G to undergo an unprecedented transformation that will make it substantially different from the previous generations of wireless cellular systems. In particular, 6G will go beyond mobile Internet and will be required to support ubiquitous AI services from the core to the end devices of the network. Meanwhile, AI will play a critical role in designing and optimizing 6G architectures, protocols, and operations. In this article, we discuss potential technologies for 6G to enable mobile AI applications, as well as AI-enabled methodologies for 6G network design and optimization. Key trends in the evolution to 6G will also be discussed.