Random Gradient Masking as a Defensive Measure to Deep Leakage in Federated Learning

Joon Kim, Sejin Park

arXiv.org Artificial Intelligence 

Federated Learning (FL)[1][2] emerged as an artificial intelligence training method that does not require sending data from peripheral devices (clients) to a central server. Instead, each client downloads the central model from the server, trains it on its private data, and sends the resulting gradients back to the server, where a server-side algorithm aggregates them to produce the next iteration of the central model. Ideally, mutually distrusting clients never share their private data, yet together they produce a central model that reflects the data of all clients. Extensive research is being conducted on optimizing the learning efficiency of FL across various aspects, such as incentive mechanisms[3], communication speed[4], non-IID training[5], and client selection[6]. However, recent research reveals that sending the gradients of private training does not ensure complete data privacy, especially in a wide cross-device environment[7]. Moreover, as a federated system, FL has to protect itself against Byzantine failures[8], backdoor injection[9], model poisoning[10], and data poisoning[11].
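To make the round structure described above concrete, the following is a minimal sketch of an FL training loop on a least-squares toy model. The function names (local_gradient, mask_gradient, aggregate), the keep_ratio parameter, and the specific masking rule (zeroing a random fraction of gradient entries before upload, one plausible reading of "random gradient masking" from the title) are illustrative assumptions, not the paper's implementation.

```python
# Sketch of one FL setup: clients hold private data, compute gradients
# locally, mask them, and upload; the server averages the uploads.
import numpy as np

def local_gradient(weights: np.ndarray, data: np.ndarray,
                   labels: np.ndarray) -> np.ndarray:
    """Client-side: gradient of a least-squares loss on private data."""
    preds = data @ weights
    return data.T @ (preds - labels) / len(labels)

def mask_gradient(grad: np.ndarray, keep_ratio: float,
                  rng: np.random.Generator) -> np.ndarray:
    """Zero out a random fraction of gradient entries before upload
    (hypothetical masking rule; keep_ratio is an assumed parameter)."""
    mask = rng.random(grad.shape) < keep_ratio
    return grad * mask

def aggregate(grads: list[np.ndarray]) -> np.ndarray:
    """Server-side: average the uploaded (masked) gradients, FedAvg-style."""
    return np.mean(grads, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 3.0])
clients = []
for _ in range(3):                      # three clients, each with fixed private data
    X = rng.normal(size=(32, 4))
    y = X @ true_w + rng.normal(scale=0.1, size=32)
    clients.append((X, y))

weights = np.zeros(4)                   # central model
for _ in range(50):                     # training rounds
    uploads = [mask_gradient(local_gradient(weights, X, y), 0.8, rng)
               for X, y in clients]
    weights -= 0.1 * aggregate(uploads)  # server applies the averaged update
```

In a sketch like this, masking reduces the gradient information an honest-but-curious server could feed into a gradient-inversion ("deep leakage") attack, at the likely cost of slower convergence; the trade-off between these two effects is the kind of question the title suggests the paper studies.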
