

Personalized Federated Learning with Moreau Envelopes

Neural Information Processing Systems

Federated learning (FL) is a decentralized and privacy-preserving machine learning technique in which a group of clients collaborate with a server to learn a global model without sharing clients' data. One challenge associated with FL is statistical diversity among clients, which restricts the global model from delivering good performance on each client's task. To address this, we propose an algorithm for personalized FL (pFedMe) using Moreau envelopes as clients' regularized loss functions, which help decouple personalized model optimization from global model learning in a bi-level problem stylized for personalized FL. Theoretically, we show that pFedMe's convergence rate is state-of-the-art: it achieves quadratic speedup for strongly convex objectives and sublinear speedup of order 2/3 for smooth nonconvex objectives. Experimentally, we verify that pFedMe achieves stronger empirical performance than the vanilla FedAvg and Per-FedAvg, a meta-learning-based personalized FL algorithm.
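The bi-level structure mentioned in the abstract can be written out as a short sketch; the notation below ($f_i$ for client $i$'s loss, $\lambda > 0$ for the regularization weight, $N$ clients) is the standard Moreau-envelope formulation and is supplied here as an assumption, not quoted from the abstract:

\[
  F_i(w) \;=\; \min_{\theta_i \in \mathbb{R}^d} \Big\{ f_i(\theta_i) + \tfrac{\lambda}{2}\,\lVert \theta_i - w \rVert^2 \Big\},
  \qquad
  \min_{w \in \mathbb{R}^d} \; \frac{1}{N} \sum_{i=1}^{N} F_i(w).
\]

The inner minimization yields each client's personalized model $\theta_i$, while the outer minimization over the envelopes $F_i$ yields the global model $w$; this is the decoupling of personalized-model optimization from global-model learning described above.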




A Convergence of lp-proj

Neural Information Processing Systems

The framework is adapted from Dinh et al. [13], with some concrete results specific to our settings. In this subsection, we provide some existing results useful for our later analysis.




FedMCSA: Personalized Federated Learning via Model Components Self-Attention

Guo, Qi, Qi, Yong, Qi, Saiyu, Wu, Di, Li, Qian

arXiv.org Artificial Intelligence

Standard FL follows three steps: (i) at each iteration, the server distributes the global model to clients; (ii) each client trains a local model on its private data, starting from the global model; (iii) the server aggregates the clients' updated local models into a new global model; these steps are repeated until convergence [1, 4]. FL can ensure effective collaboration between clients when the data distributions are independent and identically distributed (IID), i.e., clients' private data distributions are similar to each other. However, in many application scenarios, clients' private data may differ in size and class distribution; that is, the data distributions are not independent and identically distributed (Non-IID). In this case, FL may not achieve effective collaboration across clients due to differences in their private data [5]. Various algorithms have been proposed to handle Non-IID data in FL, and they can be divided into two categories: average aggregation methods and model-based aggregation methods. As shown in Figure 1(a), average aggregation methods average all local models to generate a global model and distribute it to all clients, where an additional fine-tuning step is performed on each client to train its personalized model [6, 7, 8, 9].
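The three-step loop above can be sketched in a few lines; this is a toy FedAvg-style round on a scalar least-squares model (the function names and the 1-D model are illustrative assumptions, not from the paper):

```python
def local_train(global_w, data, lr=0.1, epochs=5):
    """Step (ii): a client refines the received global model on its
    private data. Toy 1-D least-squares model with loss (w*x - y)^2."""
    w = global_w
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient step on (w*x - y)^2
    return w

def federated_round(global_w, client_datasets):
    """One round: (i) distribute global_w, (ii) local training on each
    client, (iii) FedAvg-style aggregation weighted by dataset size."""
    local_models = [local_train(global_w, d) for d in client_datasets]
    total = sum(len(d) for d in client_datasets)
    return sum(len(d) / total * w
               for w, d in zip(local_models, client_datasets))
```

Repeating `federated_round` until the global model stops changing mirrors the "repeated until convergence" loop described above.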


Loss Tolerant Federated Learning

Zhou, Pengyuan, Fang, Pei, Hui, Pan

arXiv.org Artificial Intelligence

Federated learning has attracted attention in recent years for collaboratively training models on distributed devices while preserving privacy. The limited network capacity of mobile and IoT devices has been seen as one of the major challenges for cross-device federated learning. Recent solutions have focused on threshold-based client selection schemes to guarantee communication efficiency. However, we find that this approach can cause biased client selection and result in deteriorated performance. Moreover, we find that the challenge of limited network capacity may be overstated in some cases and that packet loss is not always harmful. In this paper, we explore loss-tolerant federated learning (LT-FL) in terms of aggregation, fairness, and personalization. We use ThrowRightAway (TRA) to accelerate data uploading for low-bandwidth devices by intentionally ignoring some packet losses. The results suggest that, with proper integration, TRA and other algorithms can together guarantee personalization and fairness performance in the face of packet loss below a certain fraction (10%-30%).
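A minimal sketch of the loss-tolerance idea described above, assuming one scalar update per client; the function name, the threshold value, and the unweighted averaging are illustrative simplifications, not the paper's actual TRA protocol:

```python
def loss_tolerant_aggregate(client_updates, loss_fractions, max_loss=0.3):
    """Keep a client's (possibly incomplete) update as long as its packet-loss
    fraction stays below the tolerated threshold, instead of excluding slow or
    lossy clients outright. Threshold and averaging are illustrative only."""
    kept = [u for u, f in zip(client_updates, loss_fractions) if f <= max_loss]
    if not kept:
        raise ValueError("no client update survived the loss threshold")
    return sum(kept) / len(kept)  # simple unweighted average of survivors
```

The point of the sketch is the contrast with threshold-based client selection: here lossy clients still contribute as long as their loss fraction stays within the tolerated band, which is the behavior the abstract attributes to TRA.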