Exploit Gradient Skewness to Circumvent Byzantine Defenses for Federated Learning
Yuchen Liu, Chen Chen, Lingjuan Lyu, Yaochu Jin, Gang Chen
arXiv.org Artificial Intelligence
Federated Learning (FL) is notorious for its vulnerability to Byzantine attacks. Most current Byzantine defenses share a common inductive bias: among all the gradients, the densely distributed ones are more likely to be honest. However, this bias undermines Byzantine robustness due to a phenomenon newly identified in this paper: gradient skew. We discover that, under heterogeneous data, a group of densely distributed honest gradients skews away from the optimal gradient (the average of all honest gradients). This gradient skew allows Byzantine gradients to hide within the densely distributed skewed honest gradients, so defenses are misled into accepting the Byzantine gradients as honest. Motivated by this observation, we propose a novel skew-aware attack called STRIKE: we first search for the skewed gradients and then construct Byzantine gradients within them.
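The two-step attack described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual STRIKE algorithm: the k-nearest-neighbor density heuristic, the perturbation scale, and all function names are assumptions made here for illustration. The underlying idea is that density-based defenses (e.g., Krum) score each gradient by its distance to its nearest neighbors, so Byzantine gradients planted inside the densest skewed group receive favorable scores.

```python
# Illustrative sketch of a skew-aware attack in the spirit of STRIKE.
# The density heuristic and all names below are assumptions for
# illustration; the paper's exact search/construction steps may differ.
import numpy as np

def densest_region_center(grads: np.ndarray, k: int) -> np.ndarray:
    """Return the gradient whose k nearest neighbors are closest on
    average, i.e., the center of the most densely distributed group."""
    dists = np.linalg.norm(grads[:, None, :] - grads[None, :, :], axis=-1)
    # Column 0 of each sorted row is the zero self-distance; skip it.
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    return grads[np.argmin(knn_mean)]

def skew_aware_attack(honest_grads: np.ndarray, n_byz: int,
                      noise_scale: float = 1e-3) -> np.ndarray:
    """Construct Byzantine gradients inside the densest (skewed) group so
    that density-based defenses mistake them for honest gradients."""
    k = max(2, len(honest_grads) // 2)
    center = densest_region_center(honest_grads, k)
    # Small perturbations keep the Byzantine gradients within the dense
    # region while avoiding exact duplicates.
    rng = np.random.default_rng(0)
    noise = rng.normal(scale=noise_scale,
                       size=(n_byz, honest_grads.shape[1]))
    return center + noise

# Toy demo: under heterogeneous data, a dense group of honest gradients
# skews away from the honest mean (the "optimal gradient").
rng = np.random.default_rng(1)
skewed = rng.normal(loc=1.0, scale=0.05, size=(8, 10))    # dense, skewed group
outliers = rng.normal(loc=-2.0, scale=0.5, size=(3, 10))  # remaining honest clients
honest = np.vstack([skewed, outliers])
byz = skew_aware_attack(honest, n_byz=3)
print("Optimal gradient (honest mean):", honest.mean(axis=0)[:3])
print("Byzantine gradients hide near: ", byz.mean(axis=0)[:3])
```

In this toy setup the Byzantine gradients land near the dense skewed cluster at loc=1.0 rather than near the honest mean, which is exactly the gap the attack exploits.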
Feb-14-2025