
Neural Information Processing Systems

We thank all reviewers for their time and their valuable feedback. We will add corrections/clarifications as suggested. We would like to emphasize our contribution, as summarized by R5's thorough review: "is formulating a robust, efficient training scheme with extensive results and analysis which is significant enough." Our proposed FedDF is not the mentioned engineering solution: mutually beneficial information is shared across architectures in each FL round. GAN training is not involved in any stage of FL and cannot steal clients' data; data generation is done by the (frozen) generator before FL training, by performing inference on random noise.
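The data-generation step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the linear `tanh` "generator" is a hypothetical placeholder for a real pre-trained network, and all names and dimensions are illustrative. The point is only that the generator is frozen and run in inference mode on random noise before federated training begins, so no client data is involved.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, sample_dim, n_samples = 8, 32, 16

# Frozen (pre-trained) generator weights: never updated during FL.
W = rng.normal(size=(latent_dim, sample_dim))

# Random noise inputs to the generator.
z = rng.normal(size=(n_samples, latent_dim))

# Inference only: synthetic samples produced before FL training starts.
synthetic = np.tanh(z @ W)

assert synthetic.shape == (n_samples, sample_dim)
```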


Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer

Chang, Hongyan, Shejwalkar, Virat, Shokri, Reza, Houmansadr, Amir

arXiv.org Machine Learning

Abstract--Collaborative (federated) learning enables multiple parties to train a global model without sharing their private data, but through repeated sharing of the parameters of their local models. Despite its advantages, this approach has many known privacy and security weaknesses and performance overhead, in addition to being limited only to models with homogeneous architectures. Besides, federated learning is severely vulnerable to poisoning attacks, where some participants can adversarially influence the aggregate parameters. Large models, with high-dimensional parameter vectors, are particularly susceptible to privacy and security attacks: the curse of dimensionality in federated learning. We argue that sharing parameters is the most naive way of exchanging information in collaborative learning, as it opens all the internal state of the model to inference attacks and maximizes the model's malleability by stealthy poisoning attacks. We propose Cronus, a robust collaborative machine learning framework. The simple yet effective idea behind Cronus is to control, unify, and significantly reduce the dimensions of the information exchanged between parties, through robust knowledge transfer between their black-box local models. We evaluate all existing federated learning algorithms against poisoning attacks, and we show that Cronus is the only secure method, due to its tight robustness guarantee. Treating local models as black boxes reduces the information leakage through models, and enables the use of existing privacy-preserving algorithms that mitigate the risk of information leakage through the model's output (predictions). Cronus also has a significantly lower sample complexity than federated learning, and does not bind its security to the number of participants.
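The exchange described in the abstract — parties share predictions on shared data rather than model parameters, and the server aggregates them robustly — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the coordinate-wise median stands in for Cronus's robust aggregation step, and all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_parties, n_public, n_classes = 5, 100, 10

# Each party's soft predictions on a shared public set (softmax over random
# logits here, standing in for real black-box local models).
logits = rng.normal(size=(n_parties, n_public, n_classes))
preds = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Robust aggregation across parties: coordinate-wise median, renormalized so
# each aggregated prediction is again a probability distribution.
agg = np.median(preds, axis=0)
agg /= agg.sum(axis=-1, keepdims=True)

# Each party would then distill `agg` into its own local model as soft
# labels; only this low-dimensional output is ever exchanged, never the
# high-dimensional parameter vectors.
assert agg.shape == (n_public, n_classes)
```

Note how the dimensionality of the exchanged information is fixed by the number of public examples and classes, independent of any local model's architecture or parameter count, which is what permits heterogeneous architectures.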
Collaborative machine learning has recently emerged as a promising approach for building machine learning models using distributed training data held by multiple parties. The training is distributed, and participants repeatedly exchange information about their local models through an aggregation server. The objective is to enable all the participants to converge to a global model, while keeping their data private. This is very attractive to parties who own sensitive data and agree on performing a common machine learning task, yet are unwilling to pool their data together for centralized training. Various applications can substantially benefit from collaborative learning. Part of the work was done when Virat Shejwalkar was a research intern at NUS.