GuardFed: A Trustworthy Federated Learning Framework Against Dual-Facet Attacks
Yanli Li, Yanan Zhou, Zhongliang Guo, Nan Yang, Yuning Zhang, Huaming Chen, Dong Yuan, Weiping Ding, Witold Pedrycz
arXiv.org Artificial Intelligence
Abstract--Federated learning (FL) enables privacy-preserving collaborative model training but remains vulnerable to adversarial behaviors that compromise model utility or fairness across sensitive groups. While extensive studies have examined attacks targeting either objective, strategies that simultaneously degrade both utility and fairness remain largely unexplored. To bridge this gap, we introduce the Dual-Facet Attack (DFA), a novel threat model that concurrently undermines predictive accuracy and group fairness. Two variants, Synchronous DFA (S-DFA) and Split DFA (Sp-DFA), are further proposed to capture distinct real-world collusion scenarios. Experimental results show that existing robust FL defenses, including hybrid aggregation schemes, fail to resist DFAs effectively. To counter these threats, we propose GuardFed, a self-adaptive defense framework that maintains a fairness-aware reference model using a small amount of clean server data augmented with synthetic samples. In each training round, GuardFed computes a dual-perspective trust score for every client by jointly evaluating its utility deviation and fairness degradation, thereby enabling selective aggregation of trustworthy updates. Extensive experiments on real-world datasets demonstrate that GuardFed consistently preserves both accuracy and fairness under diverse non-IID and adversarial conditions, achieving state-of-the-art performance compared with existing robust FL methods.

The rapid advancement of deep learning (DL) has greatly accelerated the deployment of intelligent automation systems [1], providing smart services across diverse application domains. Alongside this evolution, there is an increasing emphasis on human-centered values such as privacy, fairness, and security, which extend beyond traditional performance-oriented objectives.
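The abstract does not specify how the dual-perspective trust score is computed, but the idea of jointly penalizing utility deviation and fairness degradation against a reference model can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the mixing weight `alpha`, the use of demographic parity as the fairness gap, and the exponential down-weighting are all assumptions.

```python
import numpy as np

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two groups
    (one common group-fairness gap; the paper's exact metric may differ)."""
    preds, groups = np.asarray(preds), np.asarray(groups)
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

def trust_scores(client_accs, client_gaps, ref_acc, ref_gap, alpha=0.5):
    """Combine each client's utility deviation and fairness degradation
    (both measured against a fairness-aware reference model) into
    normalized aggregation weights. All parameter names are illustrative."""
    # Utility deviation: how far the client falls below the reference accuracy.
    util_dev = np.maximum(0.0, ref_acc - np.asarray(client_accs))
    # Fairness degradation: how much the client widens the fairness gap.
    fair_deg = np.maximum(0.0, np.asarray(client_gaps) - ref_gap)
    # Joint penalty, then soft down-weighting of suspicious clients.
    penalty = alpha * util_dev + (1.0 - alpha) * fair_deg
    raw = np.exp(-penalty / (penalty.std() + 1e-8))
    return raw / raw.sum()

# Example: the second client degrades both accuracy and fairness,
# so its aggregation weight should be the smallest of the three.
weights = trust_scores(
    client_accs=[0.90, 0.50, 0.88],
    client_gaps=[0.05, 0.30, 0.06],
    ref_acc=0.90,
    ref_gap=0.05,
)
```

In a selective-aggregation round, these weights would scale each client's model update before averaging, so clients whose updates harm either accuracy or group fairness contribute less.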
Yanli Li is with the School of Artificial Intelligence and Computer Science, Nantong University, Nantong, 226019, China, and also with the School of Electrical and Computer Engineering, The University of Sydney, Sydney, 2006, Australia (e-mail: yanli.li@sydney.edu.au).
Nov-13-2025