Unmasking Efficiency: Learning Salient Sparse Models in Non-IID Federated Learning
Ohib, Riyasat, Thapaliya, Bishal, Dziugaite, Gintare Karolina, Liu, Jingyu, Calhoun, Vince, Plis, Sergey
In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach to sparse federated learning with efficient communication. SSFL identifies a sparse subnetwork prior to training by computing parameter saliency scores separately on each client's local data in non-IID settings and aggregating them to determine a global mask. Only the sparse model weights are communicated each round between the clients and the server. We validate SSFL's effectiveness on standard non-IID benchmarks, observing marked improvements in the sparsity-accuracy trade-off. Finally, we deploy our method in a real-world federated learning framework and report an improvement in communication time.
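The mask-selection step described in the abstract can be illustrated with a minimal PyTorch sketch. The saliency criterion (|w · ∇w L|, a SNIP-style connection-sensitivity score), summation as the cross-client aggregation rule, and the helper names `client_saliency` and `global_mask` are assumptions chosen for illustration, not the authors' exact implementation.

```python
# Sketch: per-client saliency scoring, score aggregation, and global top-k mask.
# Assumptions: |w * grad| saliency, sum-aggregation, toy synthetic non-IID clients.
import torch
import torch.nn as nn
import torch.nn.functional as F


def client_saliency(model: nn.Module, data: torch.Tensor, target: torch.Tensor) -> dict:
    """Compute per-parameter saliency scores on one client's local data."""
    model.zero_grad()
    loss = F.cross_entropy(model(data), target)
    loss.backward()
    # Saliency = |weight * gradient| (connection-sensitivity style score).
    return {name: (p * p.grad).abs().detach()
            for name, p in model.named_parameters() if p.grad is not None}


def global_mask(client_scores: list, sparsity: float) -> dict:
    """Aggregate client scores and keep the top (1 - sparsity) fraction of weights."""
    # Sum scores across clients (a simple, hypothetical aggregation choice).
    agg = {k: sum(s[k] for s in client_scores) for k in client_scores[0]}
    flat = torch.cat([v.flatten() for v in agg.values()])
    k = int((1.0 - sparsity) * flat.numel())
    threshold = torch.topk(flat, k).values.min()
    return {name: (v >= threshold).float() for name, v in agg.items()}


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
    # Two toy "clients" with shifted synthetic data to mimic non-IID splits.
    clients = [(torch.randn(32, 20) + i, torch.randint(0, 10, (32,))) for i in range(2)]
    scores = [client_saliency(model, x, y) for x, y in clients]
    mask = global_mask(scores, sparsity=0.9)
    kept = sum(m.sum().item() for m in mask.values())
    total = sum(m.numel() for m in mask.values())
    print(f"kept {kept:.0f}/{total} weights ({kept / total:.1%})")
```

After the global mask is fixed, each communication round would exchange only the weights where the mask is nonzero, which is what yields the reported reduction in communication time.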
arXiv.org Artificial Intelligence
May-14-2024
- Country:
  - North America > Canada > Ontario > Toronto (0.14)
  - North America > United States > Ohio (0.14)
- Genre:
  - Research Report (1.00)