Understanding Deep Gradient Leakage via Inversion Influence Functions
Neural Information Processing Systems
Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from shared gradient vectors. This attack poses significant privacy challenges for distributed learning with clients that hold sensitive data, since clients are required to share gradients. Defending against such attacks requires an understanding of when and how privacy leakage happens, which is currently lacking, largely because of the black-box nature of deep networks.
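To make the attack concrete, the following is a minimal sketch of DGL-style gradient matching: the attacker optimizes a dummy input so that the gradient it induces matches the gradient the client shared. This is a toy illustration on a fixed logistic-regression model with a single sample, not the paper's setup; all names (`leaked_gradient`, `invert_gradient`) and the analytic update are assumptions chosen for a self-contained example.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def leaked_gradient(w, x, y):
    # Per-sample logistic-loss gradient w.r.t. the weights w —
    # this is the vector a client would share in distributed training.
    return (sigmoid(w @ x) - y) * x


def invert_gradient(w, g_true, y, steps=5000, lr=0.1):
    # DGL-style inversion: gradient-descend on a dummy input x so that
    # its induced gradient matches the observed gradient g_true.
    x = np.zeros_like(g_true)  # dummy input, initialized at zero
    for _ in range(steps):
        s = sigmoid(w @ x)
        r = s - y
        d = r * x - g_true  # gradient mismatch
        # Analytic gradient of the matching loss ||d||^2 w.r.t. x.
        x -= lr * 2.0 * (r * d + (d @ x) * s * (1.0 - s) * w)
    return x


# Hypothetical toy setup: an 8-dimensional "private image" as a vector.
rng = np.random.default_rng(0)
w = 0.3 * rng.normal(size=8)   # fixed model weights known to the attacker
x_true = rng.normal(size=8)    # the private training sample
g = leaked_gradient(w, x_true, 1.0)  # what the client shares
x_rec = invert_gradient(w, g, 1.0)   # attacker's reconstruction
```

Even in this tiny linear-model case, matching the gradient recovers the direction of the private sample; the paper's setting applies the same idea to deep networks and full images, where the optimization is far less transparent.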