DRAGD: A Federated Unlearning Data Reconstruction Attack Based on Gradient Differences
Bocheng Ju, Junchao Fan, Jiaqi Liu, Xiaolin Chang
--Federated learning enables collaborative machine learning while preserving data privacy. However, the rise of federated unlearning, designed to allow clients to erase their data from the global model, introduces new privacy concerns. Specifically, the gradient exchanges during the unlearning process can leak sensitive information about deleted data. In this paper, we introduce DRAGD, a novel attack that exploits gradient discrepancies before and after unlearning to reconstruct forgotten data. We also present DRAGDP, an enhanced version of DRAGD that leverages publicly available prior data to improve reconstruction accuracy, particularly for complex datasets like facial images. Extensive experiments across multiple datasets demonstrate that DRAGD and DRAGDP significantly outperform existing methods in data reconstruction. Our work highlights a critical privacy vulnerability in federated unlearning and offers a practical solution, advancing the security of federated unlearning systems in real-world applications.

Over recent years, federated learning (FL) [1]-[6] has emerged as a transformative paradigm for collaborative machine learning, enabling multiple clients to jointly train a global model without sharing their raw data. This decentralized approach is increasingly favored for its ability to enhance data privacy and security in sensitive domains such as healthcare, finance, and mobile applications.
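The listing does not describe DRAGD's actual optimization procedure. For intuition only, the sketch below shows a generic DLG-style reconstruction in PyTorch that treats the gradient difference observed before and after unlearning as the attack's target signal. The function name, model, tensor shapes, and the way the parameter difference is converted into a gradient estimate are illustrative assumptions, not the authors' method.

```python
# Minimal sketch, NOT the paper's algorithm: a DLG-style reconstruction that
# optimizes a dummy (input, label) pair so its gradient matches the gradient
# difference observed before/after unlearning. All names and shapes are assumed.
import torch
import torch.nn.functional as F


def reconstruct_from_gradient_diff(model, target_grads, x_shape, num_classes,
                                   steps=200, lr=0.1):
    """Optimize a dummy (input, soft label) pair whose gradient matches target_grads."""
    x_dummy = torch.randn(1, *x_shape, requires_grad=True)
    y_dummy = torch.randn(1, num_classes, requires_grad=True)  # learnable soft-label logits
    optimizer = torch.optim.LBFGS([x_dummy, y_dummy], lr=lr)

    def closure():
        optimizer.zero_grad()
        pred = model(x_dummy)
        # Classic DLG objective with a soft (learnable) label
        loss = torch.sum(-F.softmax(y_dummy, dim=-1) * F.log_softmax(pred, dim=-1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # L2 distance between the dummy gradient and the observed gradient difference
        grad_loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, target_grads))
        grad_loss.backward()
        return grad_loss

    for _ in range(steps):
        optimizer.step(closure)
    return x_dummy.detach(), y_dummy.detach()


# Hypothetical usage: approximate the forgotten data's gradient from two model
# snapshots (before/after unlearning), assuming a single unlearning step of rate eta.
# target_grads = [(w_before - w_after) / eta
#                 for w_before, w_after in zip(params_before, params_after)]
```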
arXiv.org Artificial Intelligence
Jul-15-2025
- Country:
- Asia
- China > Beijing
- Beijing (0.04)
- Middle East > Jordan (0.04)
- Europe > United Kingdom
- North America > United States
- California (0.04)
- Genre:
- Research Report (0.50)
- Industry:
- Information Technology > Security & Privacy (1.00)
- Technology: