Byzantine Outside, Curious Inside: Reconstructing Data Through Malicious Updates
Kai Yue, Richeng Jin, Chau-Wai Wong, Huaiyu Dai
arXiv.org Artificial Intelligence
Federated learning (FL) enables decentralized machine learning without sharing raw data, allowing multiple clients to collaboratively learn a global model. However, studies reveal that privacy leakage is possible under commonly adopted FL protocols. In particular, a server with access to client gradients can synthesize data resembling the clients' training data. In this paper, we introduce a novel threat model in FL, named the maliciously curious client, in which a client manipulates its own gradients with the goal of inferring private data from its peers. This attacker uniquely exploits the strength of a Byzantine adversary, traditionally aimed at undermining model robustness, and repurposes it to facilitate data reconstruction attacks. We begin by formally defining this client-side threat model and providing a theoretical analysis demonstrating its ability to achieve significant reconstruction success during FL training. To demonstrate its practical impact, we further develop a reconstruction algorithm that combines gradient inversion with malicious update strategies. Our analysis and experimental results reveal a critical blind spot in FL defenses: both server-side robust aggregation and client-side privacy mechanisms may fail against the proposed attack. Surprisingly, standard server- and client-side defenses designed to enhance robustness or privacy may unintentionally amplify data leakage. Compared with the baseline attack, a misapplied defense can improve reconstructed image quality by 10-15%.
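The kind of gradient leakage the abstract builds on can be illustrated with the classic analytic inversion of a single linear layer: for squared loss, the weight gradient is a rank-one outer product of the error signal and the input, so the private input can be read off exactly by dividing by the bias gradient. The toy model, variable names, and setup below are illustrative assumptions for this sketch, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (an illustrative assumption, not the paper's model):
# a single linear layer y = W x + b trained with squared loss.
d_in, d_out = 8, 3
W = rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out)
x = rng.normal(size=d_in)      # a client's private training sample
t = rng.normal(size=d_out)     # its label

# The honest client computes its gradient update and shares it.
y = W @ x + b
delta = 2.0 * (y - t)          # dL/dy for L = ||y - t||^2
grad_W = np.outer(delta, x)    # dL/dW = delta x^T  (rank one)
grad_b = delta                 # dL/db = delta

# An observer of (grad_W, grad_b) can recover x exactly: row i of
# grad_W equals grad_b[i] * x, so dividing any row with a nonzero
# bias gradient reveals the private input.
i = int(np.argmax(np.abs(grad_b)))
x_rec = grad_W[i] / grad_b[i]

print(np.allclose(x_rec, x))   # True: the private sample is reconstructed
```

Deeper networks and batched updates break this closed form, which is why practical attacks (including the one sketched in this paper) instead optimize a dummy input so that its gradients match the observed ones.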
Jun-16-2025