Heterogeneity-Oblivious Robust Federated Learning

Zhang, Weiyao, Li, Jinyang, Song, Qi, Wang, Miao, Lin, Chungang, Luo, Haitong, Meng, Xuying, Zhang, Yujun

arXiv.org Artificial Intelligence 

Federated Learning (FL) remains highly vulnerable to poisoning attacks, especially under real-world hyper-heterogeneity, where clients differ significantly in data distributions, communication capabilities, and model architectures. Such heterogeneity not only undermines the effectiveness of aggregation strategies but also makes attacks more difficult to detect. Furthermore, high-dimensional models expand the attack surface. To address these challenges, we propose Horus, a heterogeneity-oblivious robust FL framework centered on low-rank adaptations (LoRAs). Rather than aggregating full model parameters, Horus inserts LoRAs into empirically stable layers and aggregates only the LoRAs, reducing the attack surface. We uncover a key empirical observation: the input projection (LoRA-A) is markedly more stable than the output projection (LoRA-B) under heterogeneity and poisoning. Leveraging this, we design a Heterogeneity-Oblivious Poisoning Score that uses features from LoRA-A to filter out poisoned clients. For the remaining benign clients, we propose a projection-aware aggregation mechanism that preserves collaborative signals while suppressing drift, reweighting client updates by their consistency with the global directions. Extensive experiments across diverse datasets, model architectures, and attacks demonstrate that Horus consistently outperforms state-of-the-art baselines in both robustness and accuracy.

Federated Learning (FL) has gained significant traction as a privacy-preserving paradigm for distributed training, enabling clients to collaboratively learn a global model without sharing their raw data [12], [20]. However, the decentralized nature of FL inherently introduces serious security vulnerabilities, making it susceptible to poisoning attacks, in which attackers inject malicious data or local updates. Such attacks pose a particularly insidious threat, as they can stealthily degrade or manipulate the global model over time [29].
For example, perturbing a federated model deployed in vehicular systems could autonomously start the vehicle or trigger an emergency brake, endangering human lives and compromising property safety [24].
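The two defenses sketched in the abstract, filtering clients by an anomaly score over LoRA-A updates and reweighting the surviving updates by their alignment with a global direction, can be illustrated with a minimal NumPy sketch. This is an illustrative stand-in, not the paper's exact method: the median-distance score and the cosine-similarity weights below are hypothetical placeholders for Horus's actual Heterogeneity-Oblivious Poisoning Score and projection-aware aggregation.

```python
import numpy as np

def poisoning_scores(lora_A_updates):
    """Hypothetical anomaly score for each client: L2 distance of the
    flattened LoRA-A update from the coordinate-wise median update.
    Larger scores suggest a more suspicious (possibly poisoned) client."""
    U = np.stack([u.ravel() for u in lora_A_updates])
    median = np.median(U, axis=0)
    return np.linalg.norm(U - median, axis=1)

def projection_aware_aggregate(updates, global_direction):
    """Illustrative projection-aware aggregation: weight each client
    update by its cosine similarity with a global direction, clipping
    anti-aligned (drifting) updates to zero weight."""
    flat = np.stack([u.ravel() for u in updates])
    g = global_direction.ravel()
    cos = flat @ g / (np.linalg.norm(flat, axis=1) * np.linalg.norm(g) + 1e-12)
    w = np.clip(cos, 0.0, None)          # suppress anti-aligned updates
    if w.sum() == 0:                     # degenerate case: fall back to mean
        w = np.ones(len(updates))
    w = w / w.sum()
    return (w[:, None] * flat).sum(axis=0).reshape(updates[0].shape)
```

In this toy setting, a client whose update points opposite to the benign consensus both receives the highest poisoning score and is effectively zeroed out by the alignment weights, which is the qualitative behavior the paper attributes to its filtering-then-reweighting pipeline.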