Federated $f$-Differential Privacy

Qinqing Zheng, Shuxiao Chen, Qi Long, Weijie J. Su

arXiv.org Artificial Intelligence

Unlike traditional distributed training approaches that upload all the data to central servers, federated learning performs on-device training, and only summaries of local data or local models are exchanged among clients. Typically, the clients upload their local models to the server and receive the global average back, in a repeated manner. This offers a plausible solution to the critical data privacy issue: sensitive information about individuals, such as typing history, shopping transactions, geographical locations, and medical records, stays localized. Nonetheless, a malicious client participating in federated learning might still be able to learn information about the other clients' data through the shared model weights. This is because an adversary can learn about, or even identify, certain individuals by simply tweaking the input datasets and probing the output of the algorithm [FJR15, SSSS17]. This gives rise to a pressing call for privacy-preserving federated learning algorithms. Accordingly, we urgently need a rigorous and principled framework to enhance data privacy and to quantitatively answer two important questions: Can another client identify the presence or absence of any individual record in my data during federated learning? Worse, what if all the other clients ally with each other to attack my data?
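The upload-and-average loop described above can be illustrated with a minimal federated averaging (FedAvg-style) sketch. This is an assumption-laden toy, not the paper's method: `local_update` stands in for each client's on-device training (here, gradient descent on a noise-free linear least-squares problem), and the function and parameter names are hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's on-device training step: plain gradient descent on a
    # linear least-squares objective (an illustrative stand-in for the
    # client's real local model). Raw data (X, y) never leaves the device.
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_data, rounds=10, dim=2):
    # Server loop: broadcast the global model, collect the clients'
    # locally trained weights, and replace the global model with their
    # average. Only model weights are exchanged between rounds.
    global_w = np.zeros(dim)
    for _ in range(rounds):
        local_ws = [local_update(global_w, X, y) for X, y in client_data]
        global_w = np.mean(local_ws, axis=0)  # the averaging step
    return global_w
```

Note that the shared weights are exactly the attack surface the paragraph above warns about: an adversarial client observing `global_w` across rounds can probe how it changes as inputs vary, which is what motivates adding differential privacy guarantees to this exchange.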
