Understanding Deep Gradient Leakage via Inversion Influence Functions

Neural Information Processing Systems 

Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors. This attack poses significant privacy challenges for distributed learning with clients that hold sensitive data, since clients are required to share gradients. Defending against such attacks requires an understanding of when and how privacy leakage happens, which is still largely missing, mostly because of the black-box nature of deep networks.
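The core idea behind such attacks is gradient matching: the attacker optimizes a dummy input so that the gradient it induces matches the gradient shared by the client. Below is a minimal sketch of this idea on a toy linear-regression model with a single private sample; this is our own illustration under simplified assumptions, not the paper's method or code, and real DGL attacks target deep networks with the same matching objective.

```python
import numpy as np

# Toy setup: loss L(w; x, y) = 0.5 * (w.x - y)^2 for one private sample x.
# The client shares the gradient w.r.t. the weights:  g = (w.x - y) * x.
w = np.array([1.0, 0.5, -0.5, 0.25])        # model weights (known to attacker)
x_true = np.array([0.5, -1.0, 0.25, 0.75])  # private training input
y = float(w @ x_true) - 1.0                 # label (known to attacker here)
g = (w @ x_true - y) * x_true               # shared gradient

# Attack: gradient descent on the mismatch ||grad(x_hat) - g||^2.
x_hat = np.zeros_like(x_true)               # dummy input, initialized at zero
lr = 0.01
for _ in range(5000):
    r = w @ x_hat - y                       # residual of the dummy input
    f = r * x_hat - g                       # gradient mismatch vector
    # Analytic gradient of ||f||^2 w.r.t. x_hat (chain rule through r and x_hat)
    grad = 2.0 * (w * (x_hat @ f) + r * f)
    x_hat -= lr * grad

loss = float(np.sum(((w @ x_hat - y) * x_hat - g) ** 2))
recovery_error = float(np.max(np.abs(x_hat - x_true)))
```

In this linear toy case the shared gradient is a scaled copy of the private input, so recovery is nearly exact; for deep networks the same objective is far less transparent, which is precisely the understanding gap the paper targets.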
