GlueFL: Reconciling Client Sampling and Model Masking for Bandwidth Efficient Federated Learning

Shiqi He, Qifan Yan, Feijie Wu, Lanjun Wang, Mathias Lécuyer, Ivan Beschastnikh

arXiv.org Artificial Intelligence 

Federated learning (FL) is an effective technique to directly involve edge devices in machine learning training while preserving client privacy. However, the substantial communication overhead of FL makes training challenging when edge devices have limited network bandwidth. Existing work on optimizing FL bandwidth overlooks downstream transmission and does not account for FL client sampling. In this paper we propose GlueFL, a framework that incorporates new client sampling and model compression algorithms to mitigate the low download bandwidths of FL clients. GlueFL prioritizes recently used clients and bounds the number of changed positions in compression masks in each round. Across three popular FL datasets and three state-of-the-art strategies, GlueFL reduces downstream client bandwidth by 27% on average and reduces training time by 29% on average.

Federated learning (FL) moves machine learning (ML) training to edge clients, which communicate with a central server. One important strategy is client sampling, which limits the number of clients that perform training in each round (McMahan et al., 2017; Luo et al., 2022); sampling reduces both upstream and downstream bandwidth.
