Multi-Objective Optimization for Privacy-Utility Balance in Differentially Private Federated Learning

Ranaweera, Kanishka, Smith, David, Pathirana, Pubudu N., Ding, Ming, Rakotoarivelo, Thierry, Seneviratne, Aruna

arXiv.org Artificial Intelligence 

Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data, making it a promising approach for privacy-preserving machine learning. However, ensuring differential privacy (DP) in FL presents challenges due to the trade-off between model utility and privacy protection. Clipping gradients before aggregation is a common strategy to limit privacy loss, but selecting an optimal clipping norm is non-trivial: excessively high values compromise privacy, while overly restrictive clipping degrades model performance. In this work, we propose an adaptive clipping mechanism that dynamically adjusts the clipping norm using a multi-objective optimization framework. We theoretically analyze the convergence properties of our method and demonstrate its effectiveness through extensive experiments on the MNIST, Fashion-MNIST, and CIFAR-10 datasets. Our results show that adaptive clipping consistently outperforms fixed-clipping baselines, achieving improved accuracy under the same privacy constraints. This work highlights the potential of dynamic clipping strategies to enhance privacy-utility trade-offs in differentially private federated learning.

Federated Learning (FL) has emerged as a transformative paradigm for collaborative training of machine learning models without centralized data aggregation [1], [2].

Kanishka Ranaweera is with the School of Engineering and Built Environment, Deakin University, Waurn Ponds, VIC 3216, Australia, and also with Data61, CSIRO, Eveleigh, NSW 2015, Australia. David Smith is with Data61, CSIRO, Eveleigh, NSW 2015, Australia.
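The clip-then-noise step that the abstract describes can be sketched as below. This is a minimal illustration of standard DP-style update clipping with Gaussian noise calibrated to the clipping bound; the function name and parameters are assumptions for exposition, not the paper's multi-objective adaptive mechanism, whose role would be to choose `clip_norm` dynamically rather than fix it.

```python
import numpy as np

def clip_and_noise(update, clip_norm, noise_multiplier, rng):
    """Clip a client's model update to L2 norm <= clip_norm, then add
    Gaussian noise scaled to that bound (illustrative sketch only)."""
    # Rescale so the update's L2 norm never exceeds clip_norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise standard deviation is proportional to the clipping bound,
    # as in the standard Gaussian mechanism.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

rng = np.random.default_rng(0)
update = np.array([3.0, 4.0])  # L2 norm = 5
# With zero noise, the result is the update rescaled to norm 1.
clipped = clip_and_noise(update, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
```

A fixed `clip_norm` forces the tension the abstract highlights: a large bound preserves gradient information but requires more noise for the same privacy budget, while a small bound biases updates. An adaptive scheme revisits this choice each round.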