Federated Progressive Sparsification (Purge, Merge, Tune)+

Dimitris Stripelis, Umang Gupta, Greg Ver Steeg, Jose Luis Ambite

arXiv.org Artificial Intelligence 

Federated learning is a promising approach for training machine learning models on decentralized data while keeping data private at each client. Model sparsification seeks to produce small neural models with performance comparable to large models, for example, for deployment on clients with limited memory or computational capabilities. We present FedSparsify, a simple yet effective sparsification strategy for federated training of neural networks based on progressive weight magnitude pruning. FedSparsify learns subnetworks smaller than 10% of the original network size with similar or better accuracy. Through extensive experiments, we demonstrate that FedSparsify yields an average 15-fold model size reduction, 4-fold model inference speedup, and 3-fold training communication cost improvement across various challenging domains and model architectures. Finally, we theoretically analyze FedSparsify's impact on the convergence of federated training. Overall, our results show that FedSparsify is an effective method for training extremely sparse and highly accurate models in federated learning settings.
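To make the purge-merge-tune cycle in the abstract concrete, below is a minimal NumPy sketch of one federation round of progressive magnitude pruning. It is an illustration under simplifying assumptions, not the authors' implementation: the cubic sparsity schedule, the unweighted averaging, and the `client.train` method are all hypothetical stand-ins.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Purge: keep the largest-|w| fraction (1 - sparsity); mask out the rest."""
    flat = np.abs(np.concatenate([w.ravel() for w in weights]))
    k = min(int(sparsity * flat.size), flat.size - 1)
    threshold = np.partition(flat, k)[k] if k > 0 else -np.inf
    return [np.abs(w) > threshold for w in weights]

def sparsity_schedule(round_idx, total_rounds, final_sparsity=0.9):
    """Progressively increase sparsity over rounds (cubic ramp, an assumption)."""
    frac = min(1.0, round_idx / total_rounds)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)

def federated_round(global_weights, clients, sparsity):
    # Purge: prune low-magnitude global weights under the current sparsity level.
    mask = magnitude_mask(global_weights, sparsity)
    pruned = [w * m for w, m in zip(global_weights, mask)]
    # Tune: each client fine-tunes locally under the shared mask
    # (client.train is a hypothetical local-training method).
    updates = [client.train(pruned, mask) for client in clients]
    # Merge: average the masked client models (unweighted FedAvg for simplicity).
    merged = [np.mean([u[i] for u in updates], axis=0) * mask[i]
              for i in range(len(pruned))]
    return merged, mask
```

A driver loop would call `sparsity_schedule(t, T)` each round and pass the result to `federated_round`, so the network is gradually driven toward the final sparsity target rather than pruned all at once.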
