HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning
Neural Information Processing Systems
Data heterogeneity among Federated Learning (FL) users poses a significant challenge, resulting in reduced global model performance. The community has designed various techniques to tackle this issue, among which Knowledge Distillation (KD)-based techniques are common. While these techniques effectively improve performance under high heterogeneity, they inadvertently cause greater accuracy degradation under model poisoning attacks, a phenomenon known as attack amplification. This paper presents a case study that reveals this critical vulnerability in KD-based FL systems. We show through empirical evidence why KD causes this issue and use that finding as motivation to design a hybrid distillation technique. We introduce a novel algorithm, Hybrid Knowledge Distillation for Robust and Accurate FL (HYDRA-FL), which reduces the impact of poisoning attacks by offloading part of the KD loss to a shallow layer via an auxiliary classifier.
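The abstract describes splitting the distillation loss between the final classifier and a shallow auxiliary classifier. The sketch below illustrates one plausible form of such a hybrid objective; the function name, weights `lam` and `gamma`, and the exact loss composition are assumptions for illustration and may differ from the paper's formulation.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = np.asarray(z, dtype=float) / t
    e = np.exp(z - z.max())
    return e / e.sum()

def kl_div(p, q):
    """KL(p || q); p is the teacher (global-model) distribution."""
    return float(np.sum(p * (np.log(p) - np.log(q))))

def hydra_fl_loss(ce_final, final_logits, shallow_logits, global_logits,
                  lam=0.5, gamma=0.5, temperature=2.0):
    """Illustrative hybrid distillation loss (not the paper's exact form).

    ce_final       -- cross-entropy of the final classifier on true labels
    final_logits   -- local model's final-layer logits
    shallow_logits -- auxiliary classifier's logits at a shallow layer
    global_logits  -- global (teacher) model's logits
    lam            -- weight kept on final-layer distillation
    gamma          -- weight offloaded to the shallow auxiliary classifier
    """
    teacher = softmax(global_logits, temperature)
    kd_final = kl_div(teacher, softmax(final_logits, temperature))
    kd_shallow = kl_div(teacher, softmax(shallow_logits, temperature))
    return ce_final + lam * kd_final + gamma * kd_shallow
```

Keeping `lam` below the weight a plain KD baseline would use is what limits how strongly a poisoned global model can pull the final classifier, while `gamma` preserves distillation benefit at the shallow layer.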