DROP: Poison Dilution via Knowledge Distillation for Federated Learning
Georgios Syros, Anshuman Suri, Farinaz Koushanfar, Cristina Nita-Rotaru, Alina Oprea
arXiv.org Artificial Intelligence
Federated Learning (FL) is vulnerable to adversarial manipulation: malicious clients can inject poisoned updates to influence the global model's behavior. While existing defense mechanisms have made notable progress, they fail to protect against adversaries that aim to induce targeted backdoors under different learning and attack configurations. To address this limitation, we introduce DROP (Distillation-based Reduction Of Poisoning), a novel defense mechanism that combines clustering and activity-tracking techniques with knowledge distillation of benign client behavior, targeting stealthy adversaries that operate at low data-poisoning rates and across diverse malicious-client ratios within the federation. Through extensive experimentation, our approach demonstrates superior robustness compared to existing defenses across a wide range of learning configurations. Finally, we evaluate existing defenses and our method under the challenging setting of non-IID client data distributions and highlight the difficulty of designing a resilient FL defense in this setting.
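The abstract's core idea of diluting poison by distilling benign client behavior into the global model can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the function names, the simple mean over a filtered client set, and the temperature value are all illustrative assumptions. The sketch shows standard knowledge distillation: clients judged benign (e.g., by clustering their updates) supply softened predictions, and the global (student) model is trained against their average.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_targets(client_logits, benign_idx, T=2.0):
    """Average the softened predictions of clients retained after filtering.

    client_logits: array of shape (num_clients, num_samples, num_classes)
    benign_idx: indices of clients judged benign (assumed given, e.g. by
                clustering client updates -- hypothetical upstream step).
    """
    probs = softmax(client_logits[benign_idx], T=T)
    return probs.mean(axis=0)  # (num_samples, num_classes)

def kd_loss(student_logits, targets, T=2.0):
    # Cross-entropy between the soft teacher targets and the student's
    # temperature-scaled predictions, scaled by T^2 as in standard
    # knowledge distillation.
    log_p = np.log(softmax(student_logits, T=T) + 1e-12)
    return -(T * T) * (targets * log_p).sum(axis=-1).mean()
```

A poisoned client excluded from `benign_idx` contributes nothing to the targets, so its backdoor signal is diluted out of the distilled global model; that is the intuition the sketch captures, under the stated assumptions.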
Feb-10-2025
- Country:
  - North America > United States > California (0.28)
- Genre:
  - Research Report > New Finding (0.93)
- Industry:
  - Government (0.68)
  - Information Technology > Security & Privacy (1.00)
- Technology:
  - Information Technology
    - Artificial Intelligence > Machine Learning
      - Neural Networks (0.93)
      - Statistical Learning > Clustering (0.46)
    - Data Science > Data Mining (1.00)
    - Security & Privacy (1.00)