Random Gradient Masking as a Defensive Measure to Deep Leakage in Federated Learning
arXiv.org Artificial Intelligence
Federated Learning (FL)[1][2] emerged as an artificial intelligence training method that does not require sending data from peripheral devices (clients) to a central server. Instead, each client downloads the central model from the server, trains it on its private data, and sends the resulting gradients back to the server, where a server-side algorithm aggregates them to produce the next iteration of the central model. Ideally, mutually distrusting clients never communicate their private data, yet they jointly produce a central model that reflects the data of all clients. Extensive research is being conducted on optimizing the learning efficiency of FL along various dimensions, such as incentive mechanisms[3], communication speed[4], non-IID training[5], and client selection[6]. However, recent research reveals that sending the gradients of private training does not ensure complete data privacy, especially in a wide cross-device environment[7]. Moreover, as a federated system, FL must also protect itself against Byzantine failures[8], backdoor injection[9], model poisoning[10], and data poisoning[11].
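The FL round described above, combined with the random gradient masking named in the title, can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: `mask_gradient`, the toy gradients, and the FedAvg-style `federated_round` update are all assumptions made for the sake of a self-contained example.

```python
import random

def mask_gradient(grad, mask_ratio, rng):
    """Zero out a random fraction of gradient entries before upload.

    A generic sketch of random gradient masking as a deep-leakage
    defense: masked entries carry no information to the server.
    """
    n_masked = int(len(grad) * mask_ratio)
    idx = rng.sample(range(len(grad)), n_masked)
    masked = list(grad)
    for i in idx:
        masked[i] = 0.0
    return masked

def federated_round(model, client_grads, lr=0.1):
    """Server side: average the uploaded gradients, update the model."""
    avg = [sum(g[j] for g in client_grads) / len(client_grads)
           for j in range(len(model))]
    return [w - lr * g for w, g in zip(model, avg)]

rng = random.Random(0)
model = [0.0, 0.0, 0.0, 0.0]
# Toy per-client gradients standing in for local backpropagation.
raw_grads = [[1.0, 2.0, 3.0, 4.0], [2.0, 1.0, -2.0, -1.0]]
# Each client masks half of its gradient entries before uploading.
uploads = [mask_gradient(g, 0.5, rng) for g in raw_grads]
new_model = federated_round(model, uploads)
```

The trade-off the paper studies follows directly from this shape: a higher `mask_ratio` reveals less of each client's gradient to a leakage attack, but also removes signal from the aggregated update.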
Aug-15-2024