
Neural Information Processing Systems 

Author Response for: "Inverting Gradients - How easy is it to break privacy in federated learning?"

General Comments: We thank all reviewers for their valuable feedback and their interest in this attack. Some questions arose about the theoretical analysis for fully connected layers. Finally, knowledge of the feature representation already enables attacks such as Melis et al. This non-uniformity is a significant result for the privacy of gradient batches. Fig. 4 of [35] looks better because the attack scenario there is easier.
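As a minimal sketch of the theoretical point about fully connected layers (this is an illustrative example with assumed layer sizes and an assumed toy loss, not the authors' code): for a linear layer with bias, dL/dW = (dL/db) x^T, so the layer input x can be recovered analytically from the shared parameter gradients by dividing any weight-gradient row by its bias gradient.

import torch

# Assumed toy setup: one fully connected layer with bias and a scalar loss.
torch.manual_seed(0)
d_in, d_out = 8, 4
x = torch.randn(d_in)                      # "private" input to the layer
layer = torch.nn.Linear(d_in, d_out)
loss = layer(x).pow(2).sum()               # any scalar loss suffices here
loss.backward()

grad_W = layer.weight.grad                 # (d_out, d_in): equals (dL/db) outer x
grad_b = layer.bias.grad                   # (d_out,)

# Pick a row with a nonzero bias gradient and divide to recover the input.
i = torch.argmax(grad_b.abs())
x_rec = grad_W[i] / grad_b[i]

print(torch.allclose(x, x_rec, atol=1e-5)) # True: input reconstructed exactly

This analytic reconstruction needs no optimization at all, which is why gradients of fully connected layers are treated as directly revealing their inputs.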
