Fay, Dominik
Locally Differentially Private Online Federated Learning With Correlated Noise
Zhang, Jiaojiao, Zhu, Linglingzhi, Fay, Dominik, Johansson, Mikael
We introduce a locally differentially private (LDP) algorithm for online federated learning that employs temporally correlated noise to improve utility while preserving privacy. To address challenges posed by the correlated noise and local updates with streaming non-IID data, we develop a perturbed iterate analysis that controls the impact of the noise on the utility. Moreover, we demonstrate how the drift errors from local updates can be effectively managed for several classes of nonconvex loss functions. Subject to an $(\epsilon,\delta)$-LDP budget, we establish a dynamic regret bound that quantifies the impact of key parameters and the intensity of changes in the dynamic environment on the learning performance. Numerical experiments confirm the efficacy of the proposed algorithm.
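The abstract describes the algorithm only at a high level; below is a minimal sketch of one possible round structure under stated assumptions: AR(1) temporal correlation for the noise, norm clipping before perturbation, and plain averaging at the server. All names (`fl_round`, `ar1_noise`, `rho`, `sigma`) and constants are illustrative, not the paper's notation or its exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(v, c):
    """Clip the update to norm c so the LDP noise scale is well defined."""
    n = np.linalg.norm(v)
    return v if n <= c else v * (c / n)

def ar1_noise(prev, rho, sigma):
    """Temporally correlated Gaussian noise: an AR(1) recursion whose
    stationary marginal scale stays sigma across rounds."""
    return rho * prev + np.sqrt(1.0 - rho**2) * rng.normal(0.0, sigma, prev.shape)

def fl_round(w, clients, noise, lr=0.1, local_steps=3, rho=0.9, sigma=0.5, c=1.0):
    """One communication round: local updates, local perturbation (LDP), averaging."""
    updates = []
    for i, grad_fn in enumerate(clients):
        w_i = w.copy()
        for _ in range(local_steps):  # local updates on the client's data stream
            w_i -= lr * grad_fn(w_i)
        noise[i] = ar1_noise(noise[i], rho, sigma)
        updates.append(clip(w_i - w, c) + noise[i])  # perturbed before leaving the client
    return w + np.mean(updates, axis=0)

# Toy demo with two fixed quadratic objectives (in the paper, the data
# streams and the environment drift over time).
d = 5
clients = [lambda w: w - 1.0, lambda w: w + 1.0]  # gradients of 0.5*||w -/+ 1||^2
w, noise = np.zeros(d), [np.zeros(d) for _ in clients]
for _ in range(50):
    w = fl_round(w, clients, noise)
print(w)  # fluctuates around the average minimizer, 0
```

The intuition behind correlating the noise in this line of work is that successive perturbations partially cancel when updates are aggregated over rounds, which is what the perturbed iterate analysis quantifies.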
Dynamic Privacy Allocation for Locally Differentially Private Federated Learning with Composite Objectives
Zhang, Jiaojiao, Fay, Dominik, Johansson, Mikael
This paper proposes a locally differentially private federated learning algorithm for strongly convex but possibly nonsmooth problems that protects the gradients of each worker against an honest but curious server. The proposed algorithm adds artificial noise to the shared information to ensure privacy and dynamically allocates the time-varying noise variance to minimize an upper bound of the optimization error subject to a predefined privacy budget constraint.
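As a sketch of the allocation step only: assuming the per-round privacy cost of the Gaussian mechanism scales as $\sigma_t^{-2}$ (as in zCDP-style composition) and a surrogate error bound of the form $\sum_t w_t \sigma_t^2$, the optimal variances have a closed form. The weights, the surrogate objective, and the budget parameterization are assumptions for illustration; the paper's actual bound and constants differ.

```python
import numpy as np

def allocate_noise_stds(weights, budget):
    """Minimize sum_t w_t * sigma_t^2  s.t.  sum_t sigma_t^{-2} = budget.
    The KKT conditions give sigma_t^2 = (sum_s sqrt(w_s)) / (budget * sqrt(w_t)):
    rounds that weigh more in the error bound receive less noise."""
    w = np.asarray(weights, dtype=float)
    var = np.sum(np.sqrt(w)) / (budget * np.sqrt(w))
    return np.sqrt(var)  # per-round noise standard deviations

# Illustrative weights: under strong convexity, early noise is contracted
# away, so later rounds carry more weight in the error bound.
T, gamma = 10, 0.9
w = gamma ** (T - 1 - np.arange(T))
print(allocate_noise_stds(w, budget=2.0))  # decreasing noise over rounds
```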
Privacy Amplification via Importance Sampling
Fay, Dominik, Mair, Sebastian, Sjölund, Jens
We examine the privacy-enhancing properties of subsampling a data set via importance sampling as a pre-processing step for differentially private mechanisms. This extends the established privacy amplification by subsampling result to importance sampling where each data point is weighted by the reciprocal of its selection probability. The implications for privacy of weighting each point are not obvious. On the one hand, a lower selection probability leads to a stronger privacy amplification. On the other hand, the higher the weight, the stronger the influence of the point on the output of the mechanism in the event that the point does get selected. We provide a general result that quantifies the trade-off between these two effects. We show that heterogeneous sampling probabilities can lead to both stronger privacy and better utility than uniform subsampling while retaining the subsample size. In particular, we formulate and solve the problem of privacy-optimal sampling, that is, finding the importance weights that minimize the expected subset size subject to a given privacy budget. Empirically, we evaluate the privacy, efficiency, and accuracy of importance sampling-based privacy amplification using k-means clustering as an example.
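A minimal sketch of the subsampling step the abstract describes, assuming Poisson-style selection with heterogeneous probabilities and a noisy weighted mean as a stand-in downstream mechanism (the paper's empirical study uses k-means; the probabilities, noise scale, and function names here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def importance_subsample(X, p):
    """Poisson-style sampling: keep point i with probability p[i] and
    weight it by 1/p[i], so weighted statistics remain unbiased."""
    keep = rng.random(len(X)) < p
    return X[keep], 1.0 / p[keep]

def noisy_weighted_mean(Xs, ws, sigma):
    """Stand-in downstream DP mechanism on the weighted subsample. The
    amplified (epsilon, delta) analysis that would prescribe sigma is the
    paper's contribution; a fixed sigma is used here for illustration."""
    m = (ws[:, None] * Xs).sum(axis=0) / ws.sum()
    return m + rng.normal(0.0, sigma, m.shape)

X = rng.normal(size=(1000, 2))
# Heterogeneous selection probabilities (illustrative): points with larger
# norm, i.e. larger influence on the mean, are kept with higher probability,
# capping the weight 1/p[i] they would otherwise carry.
p = np.clip(np.linalg.norm(X, axis=1) / 4.0, 0.05, 1.0)
Xs, ws = importance_subsample(X, p)
print(len(Xs), noisy_weighted_mean(Xs, ws, sigma=0.1))
```

The trade-off from the abstract is visible in the code: lowering `p[i]` strengthens the amplification for point i but inflates its weight `1/p[i]`, and hence its influence on the mechanism's output when it is selected.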