FEMNIST



Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing

Neural Information Processing Systems

Federated learning (FL) is a popular distributed computational setting where training is performed locally or privately [30, 36] and where hyperparameter tuning has been identified as a critical problem [18].




BackFed: An Efficient & Standardized Benchmark Suite for Backdoor Attacks in Federated Learning

Dao, Thinh, Nguyen, Dung Thuy, Doan, Khoa D, Wong, Kok-Seng

arXiv.org Artificial Intelligence

Research on backdoor attacks in Federated Learning (FL) has accelerated in recent years, with new attacks and defenses continually proposed in an escalating arms race. However, the evaluation of these methods remains neither standardized nor reliable. First, there are severe inconsistencies in the evaluation settings across studies, and many rely on unrealistic threat models. Second, our code review uncovers semantic bugs in the official codebases of several attacks that artificially inflate their reported performance. These issues raise fundamental questions about whether current methods are truly effective or simply overfitted to narrow experimental setups. We introduce BackFed, a benchmark designed to standardize and stress-test FL backdoor evaluation by unifying attacks and defenses under a common evaluation framework that mirrors realistic FL deployments. Our benchmark on three representative datasets with three distinct architectures reveals critical limitations of existing methods. Malicious clients often require excessive training time and computation, making them vulnerable to server-enforced time constraints. Meanwhile, several defenses incur severe accuracy degradation or aggregation overhead. Popular defenses and attacks achieve limited performance in our benchmark, which challenges their previous efficacy claims. We establish BackFed as a rigorous and fair evaluation framework that enables more reliable progress in FL backdoor research.


Byzantine Resilient Federated Multi-Task Representation Learning

Le, Tuan, Moothedath, Shana

arXiv.org Artificial Intelligence

In this paper, we propose BR-MTRL, a Byzantine-resilient multi-task representation learning framework that handles faulty or malicious agents. Our approach leverages representation learning through a shared neural network model, where all clients share fixed layers, except for a client-specific final layer. This structure captures shared features among clients while enabling individual adaptation, making it a promising approach for leveraging client data and computational power in heterogeneous federated settings to learn personalized models. To learn the model, we employ an alternating gradient descent strategy: each client optimizes its local model, updates its final layer, and sends estimates of the shared representation to a central server for aggregation. To defend against Byzantine agents, we employ two robust aggregation methods for client-server communication, Geometric Median and Krum. Our method enables personalized learning while maintaining resilience in distributed settings. We implemented the proposed algorithm in a federated testbed built on the Amazon Web Services (AWS) platform and compared its performance with various benchmark algorithms and their variations. Through experiments using real-world datasets, including CIFAR-10 and FEMNIST, we demonstrated the effectiveness and robustness of our approach and its transferability to new unseen clients with limited data, even in the presence of Byzantine adversaries.
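The two robust aggregation rules the abstract names, Krum and Geometric Median, can be sketched as follows. This is an illustrative Python sketch of the standard formulations (Krum scores each update by its summed squared distance to its nearest neighbors; the geometric median is computed by Weiszfeld iterations), not the paper's AWS implementation; function and variable names are ours.

```python
import numpy as np

def krum(updates, num_byzantine):
    """Krum rule (Blanchard et al., 2017): return the client update whose
    summed squared distance to its n - f - 2 nearest neighbors is smallest.
    `updates` is a list of 1-D numpy arrays, `num_byzantine` is f."""
    n = len(updates)
    k = n - num_byzantine - 2  # neighbors counted in each score
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    # sort each row of pairwise distances; skip index 0, the zero self-distance
    scores = [np.sort(row)[1:k + 1].sum() for row in dists]
    return updates[int(np.argmin(scores))]

def geometric_median(updates, iters=100, eps=1e-8):
    """Weiszfeld iteration for the geometric median of the client updates:
    repeatedly re-average the points with weights inversely proportional
    to their distance from the current estimate."""
    z = np.mean(updates, axis=0)
    for _ in range(iters):
        d = np.array([max(np.linalg.norm(u - z), eps) for u in updates])
        z = np.sum([u / di for u, di in zip(updates, d)], axis=0) / np.sum(1.0 / d)
    return z
```

Both rules discard or down-weight outlying updates: with one Byzantine client sending a far-off vector, Krum selects an update from the honest cluster, and the geometric median stays near it.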



FeDABoost: Fairness Aware Federated Learning with Adaptive Boosting

Arachchige, Tharuka Kasthuri, Boeva, Veselka, Abghari, Shahrooz

arXiv.org Artificial Intelligence

This work focuses on improving the performance and fairness of Federated Learning (FL) in non-IID settings by enhancing model aggregation and boosting the training of underperforming clients. We propose FeDABoost, a novel FL framework that integrates a dynamic boosting mechanism and an adaptive gradient aggregation strategy. Inspired by the weighting mechanism of the Multiclass AdaBoost (SAMME) algorithm, our aggregation method assigns higher weights to clients with lower local error rates, thereby promoting more reliable contributions to the global model. In parallel, FeDABoost dynamically boosts underperforming clients by adjusting the focal loss focusing parameter, emphasizing hard-to-classify examples during local training. These mechanisms work together to enhance the global model's fairness by reducing disparities in client performance and encouraging fair participation. We have evaluated FeDABoost on three benchmark datasets: MNIST, FEMNIST, and CIFAR-10, and compared its performance with that of FedAvg and Ditto. The results show that FeDABoost achieves improved fairness and competitive performance.
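The two mechanisms the abstract describes can be sketched in a few lines: SAMME-style client weighting for aggregation and a focal loss whose focusing parameter gamma is the knob adjusted for underperforming clients. This is a hedged illustration of the standard SAMME weight and focal loss formulas; FeDABoost's exact aggregation rule and normalization may differ, and all names here are ours.

```python
import math

def samme_weight(error_rate, num_classes):
    """SAMME-style weight log((1 - err) / err) + log(K - 1): clients with
    lower local error rates receive higher aggregation weights."""
    err = min(max(error_rate, 1e-8), 1 - 1e-8)  # clip away 0 and 1
    return math.log((1 - err) / err) + math.log(num_classes - 1)

def aggregate(client_params, error_rates, num_classes):
    """Weighted average of per-client parameter lists using SAMME weights
    (negative weights, i.e. worse-than-random clients, are zeroed out)."""
    w = [max(samme_weight(e, num_classes), 0.0) for e in error_rates]
    total = sum(w) or 1.0
    dim = len(client_params[0])
    return [sum(wi * p[i] for wi, p in zip(w, client_params)) / total
            for i in range(dim)]

def focal_loss(p_true, gamma=2.0):
    """Focal loss -(1 - p)^gamma * log(p) on the true-class probability:
    raising gamma down-weights easy examples and emphasizes hard ones."""
    return -((1.0 - p_true) ** gamma) * math.log(max(p_true, 1e-12))
```

With gamma = 0 the focal loss reduces to ordinary cross-entropy; raising gamma shrinks the contribution of well-classified examples (high p_true), which is how boosting an underperforming client shifts its local training toward its hard examples.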