Strategic Incentivization for Locally Differentially Private Federated Learning

Yashwant Krishna Pagoti, Arunesh Sinha, Shamik Sural

arXiv.org Artificial Intelligence 

In Federated Learning (FL), multiple clients jointly train a machine learning model by sharing gradient information, instead of raw data, with a server over multiple rounds. To address the possibility of information leakage in spite of sharing only the gradients, Local Differential Privacy (LDP) is often used. In LDP, clients add a selective amount of noise to the gradients before sending them to the server. Although such noise addition protects the privacy of clients, it leads to a degradation in global model accuracy. In this paper, we model this privacy-accuracy trade-off as a game, in which the server incentivizes the clients to add a lower degree of noise to achieve higher accuracy, while the clients attempt to preserve their privacy at the cost of a potential loss in accuracy. A token-based incentivization mechanism is introduced in which the number of tokens credited to a client in an FL round is a function of the degree of perturbation of its gradients. The client can later access a newly updated global model only after acquiring enough tokens, which are deducted from its balance. We identify the players, their actions and payoffs, and perform a strategic analysis of the game. Extensive experiments were carried out to study the impact of different parameters.

Federated Learning (FL) allows multiple clients to train a model by sharing their local gradients with a central server over multiple rounds. To further prevent data leakage through different forms of inference attacks on FL [1], the use of Local Differential Privacy (LDP) has been proposed [2]. However, LDP-FL faces a critical challenge in ensuring fair participation while attempting to achieve high accuracy of the global model and respecting the privacy concerns of individual clients. The clients tend to contribute differently to the model, as their degree of participation varies based on their privacy budget and the perceived value of their contributions.
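The paper does not specify the exact LDP mechanism at this point; as an illustration only, the following sketch shows one common way a client could perturb its gradient before upload, assuming the Gaussian mechanism with L2 clipping. The function name `perturb_gradient` and all parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

def perturb_gradient(grad, epsilon, clip_norm=1.0, delta=1e-5):
    """Illustrative LDP perturbation of a client's local gradient.

    A smaller privacy budget `epsilon` yields more noise (stronger privacy)
    and thus a lower-quality contribution to the global model.
    """
    grad = np.asarray(grad, dtype=float)
    # Clip so the L2 sensitivity of the released gradient is at most clip_norm.
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Gaussian noise calibrated to (epsilon, delta)-DP for sensitivity clip_norm.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return grad + np.random.normal(0.0, sigma, size=grad.shape)
```

The noise scale is inversely proportional to epsilon, which is exactly the lever the incentive mechanism acts on: less perturbation means a more useful gradient for the server.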
Thus, there are two opposing factors affecting the success of an LDP-FL set up. The goal of the server is to achieve high global model accuracy and hence, would prefer the least possible perturbation of gradients done by the clients. The clients, on the other hand, are more inclined to behave in a way that protects their privacy and tend to add more noise to their gradients. However, if all the clients overly perturb their gradients, eventually the accuracy of the global model will suffer, rendering the LDP-FL process ineffective.
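The token mechanism described above can be sketched as simple server-side accounting: tokens credited per round grow with the client's privacy budget (i.e., shrink with the degree of perturbation), and access to the updated global model is gated on the balance. The schedule `tokens_for_round`, the class `TokenLedger`, and all constants below are hypothetical placeholders, since the paper's exact crediting function is not given in this excerpt.

```python
def tokens_for_round(epsilon, base=10.0, eps_max=8.0):
    """Hypothetical schedule: more tokens for less perturbation
    (larger privacy budget epsilon), capped at eps_max."""
    return base * min(epsilon, eps_max) / eps_max

class TokenLedger:
    """Minimal sketch of server-side token accounting."""
    def __init__(self, access_cost=5.0):
        self.balance = {}
        self.access_cost = access_cost

    def credit(self, client_id, epsilon):
        # Credit tokens as a function of the client's reported privacy budget.
        self.balance[client_id] = self.balance.get(client_id, 0.0) + tokens_for_round(epsilon)

    def request_model(self, client_id):
        # The client may download the updated global model only if it has
        # enough tokens; the access cost is deducted from its balance.
        if self.balance.get(client_id, 0.0) >= self.access_cost:
            self.balance[client_id] -= self.access_cost
            return True
        return False
```

Under such a schedule, a client that always perturbs heavily accumulates tokens slowly and is locked out of fresh global models, which is the strategic tension the game-theoretic analysis formalizes.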
