FeDABoost: Fairness Aware Federated Learning with Adaptive Boosting
Tharuka Kasthuri Arachchige, Veselka Boeva, Shahrooz Abghari
–arXiv.org Artificial Intelligence
This work focuses on improving the performance and fairness of Federated Learning (FL) in non-IID settings by enhancing model aggregation and boosting the training of underperforming clients. We propose FeDABoost, a novel FL framework that integrates a dynamic boosting mechanism and an adaptive gradient aggregation strategy. Inspired by the weighting mechanism of the Multiclass AdaBoost (SAMME) algorithm, our aggregation method assigns higher weights to clients with lower local error rates, thereby promoting more reliable contributions to the global model. In parallel, FeDABoost dynamically boosts underperforming clients by adjusting the focal loss focusing parameter, emphasizing hard-to-classify examples during local training. Together, these mechanisms improve the global model's fairness by reducing disparities in client performance and encouraging fair participation. We have evaluated FeDABoost on three benchmark datasets, MNIST, FEMNIST, and CIFAR10, and compared its performance with that of FedAvg and Ditto. The results show that FeDABoost achieves improved fairness and competitive performance.
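The abstract describes two mechanisms: SAMME-style aggregation weights derived from each client's local error rate, and a focal loss whose focusing parameter is raised for underperforming clients. The paper's actual implementation is not shown here; the following is a minimal sketch under stated assumptions — `samme_weight` uses the standard SAMME formula log((1 − err)/err) + log(K − 1), client updates are represented as plain dicts of floats, and the negative-weight clipping and epsilon guard are illustrative choices, not details from the paper.

```python
import math

def samme_weight(error_rate: float, num_classes: int) -> float:
    """SAMME-style weight: clients with lower local error get larger weight.
    The epsilon clamp guarding the log is an illustrative choice."""
    eps = 1e-10
    err = min(max(error_rate, eps), 1.0 - eps)
    return math.log((1.0 - err) / err) + math.log(num_classes - 1)

def aggregate(client_updates, client_errors, num_classes):
    """Weighted average of client updates (each update a dict of floats).
    Clipping weights at zero (clients worse than random chance) is an
    assumption for this sketch, not a detail taken from the paper."""
    weights = [max(samme_weight(e, num_classes), 0.0) for e in client_errors]
    total = sum(weights) or 1.0
    return {k: sum(w * u[k] for w, u in zip(weights, client_updates)) / total
            for k in client_updates[0]}

def focal_loss(p_true: float, gamma: float) -> float:
    """Focal loss for the true-class probability p_true:
    -(1 - p)^gamma * log(p). A larger gamma down-weights easy examples,
    emphasizing hard-to-classify ones during local training."""
    return -((1.0 - p_true) ** gamma) * math.log(p_true)
```

For example, a client with 10% local error receives a larger aggregation weight than one with 30% error, and raising `gamma` shrinks the loss on already well-classified examples while keeping it large on misclassified ones.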
Oct-6-2025