Enhancing Convergence, Privacy and Fairness for Wireless Personalized Federated Learning: Quantization-Assisted Min-Max Fair Scheduling

Xiyu Zhao, Qimei Cui, Ziqiang Du, Weicai Li, Xi Yu, Wei Ni, Ji Zhang, Xiaofeng Tao, Ping Zhang

arXiv.org Artificial Intelligence 

Abstract—Personalized federated learning (PFL) offers a solution to balancing personalization and generalization by conducting federated learning (FL) to guide personalized learning (PL). Little attention has been given to wireless PFL (WPFL), where privacy concerns arise. Performance fairness of the PL models is another challenge, resulting from communication bottlenecks in WPFL. This paper exploits quantization errors to enhance the privacy of WPFL and proposes a novel quantization-assisted Gaussian differential privacy (DP) mechanism. We analyze the convergence upper bounds of the individual PL models by considering the impact of the mechanism (i.e., quantization errors and Gaussian DP noises) and imperfect communication channels on the FL of WPFL. Based on these bounds, we formulate a min-max fair scheduling problem that minimizes the largest convergence upper bound among the clients. We solve this problem by revealing its nested structure and decoupling it into subproblems that are solved sequentially for the client selection, channel allocation, and power control, and for the learning rates and PL-FL weighting coefficients. Experiments validate our analysis and demonstrate that our approach substantially outperforms alternative scheduling strategies by 87.

Manuscript received 28 October 2024; revised 18 December 2024; accepted 22 April 2025. This work was supported by the National Key Research and Development Program of China under Grant No. 2020YFB1806804 and the Beijing Natural Science Foundation Program under Grant No. L232002.

Personalized federated learning (PFL) has recently been proposed to account for both generalization and personalization. It can strike a balance between personalized models and the global model, e.g., via a global-regularized multi-task framework [1], as sketched below.
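A common form of such a global-regularized personalized objective (a sketch under assumed notation; the exact objective of [1] and of this paper may differ) lets client $k$ train its personalized model $\theta_k$ while being pulled toward the global FL model $w^{\star}$:
\[
\min_{\theta_k}\; f_k(\theta_k) + \frac{\lambda_k}{2}\,\lVert \theta_k - w^{\star} \rVert^2 ,
\]
where $f_k$ is client $k$'s local empirical loss and $\lambda_k$ is the PL-FL weighting coefficient that trades personalization (small $\lambda_k$) against generalization (large $\lambda_k$).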
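The quantization-assisted Gaussian DP mechanism summarized in the abstract can be pictured with the following minimal Python sketch. The clipping bound, stochastic quantizer, and noise calibration below are illustrative assumptions rather than the paper's exact mechanism; the point is only that the quantization error already perturbs the uploaded update, so less Gaussian noise has to be added to reach a given perturbation level.

import numpy as np

def stochastic_quantize(x, num_levels=16, clip=1.0):
    # Clip the update and round each coordinate stochastically to one of
    # `num_levels` uniformly spaced levels in [-clip, clip]; the rounding
    # error is an unbiased, bounded perturbation of the update.
    x = np.clip(x, -clip, clip)
    step = 2.0 * clip / (num_levels - 1)
    low = np.floor((x + clip) / step)
    prob_up = (x + clip) / step - low
    q = low + (np.random.rand(*x.shape) < prob_up)
    return q * step - clip

def quantization_assisted_gaussian_dp(update, clip=1.0, sigma_target=1.0, num_levels=16):
    # Target perturbation: what a plain Gaussian mechanism with standard
    # deviation sigma_target * clip would inject (assumption for illustration).
    var_needed = (sigma_target * clip) ** 2
    # Variance proxy of the stochastic-rounding error (at most step**2 / 4).
    step = 2.0 * clip / (num_levels - 1)
    var_quant = step ** 2 / 4.0
    # Add only the Gaussian noise still missing after quantization.
    var_gauss = max(var_needed - var_quant, 0.0)
    q = stochastic_quantize(update, num_levels=num_levels, clip=clip)
    return q + np.random.normal(0.0, np.sqrt(var_gauss), size=update.shape)

For example, quantization_assisted_gaussian_dp(np.random.randn(10)) returns a quantized, noised update; with coarser quantization (fewer levels) the quantization error grows and the Gaussian noise that must still be added shrinks.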