FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization

Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani

arXiv.org, Machine Learning

Federated Learning is a novel paradigm that aims to train a statistical model at the "edge" nodes, as opposed to traditional distributed computing systems such as data centers [Konečný et al., 2016, Li et al., 2019a]. The main objective of federated learning is to fit a model to data generated by network devices without continuously transferring the massive amounts of collected data from the edge of the network to back-end servers for processing. Federated learning has been deployed by major technology companies with the goal of providing privacy-preserving services using users' data [Bonawitz et al., 2019]. Examples of such applications are learning from wearable devices [Huang et al., 2018], learning sentiment [Smith et al., 2017], and location-based services [Samarakoon et al., 2018].

While federated learning is a promising paradigm for such applications, several challenges remain to be resolved. In this paper, we focus on two significant challenges of federated learning and propose a novel federated learning algorithm that addresses the following two challenges: (i) Communication bottleneck. Communication bandwidth is a major bottleneck in federated learning, as a large number of devices attempt to communicate their local updates to a central parameter server. Thus, at a high level, a communication-efficient federated learning algorithm must send such updates in a compressed manner and infrequently.
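The paragraph above names the two mechanisms in the paper's title: periodic averaging (updates sent infrequently) and quantization (updates sent compressed). Below is a minimal NumPy sketch of how these two ideas combine in a federated loop. The unbiased stochastic quantizer, the toy quadratic losses, the step size, and all variable names are illustrative assumptions, not the paper's exact FedPAQ specification.

```python
# A minimal sketch of periodic averaging plus quantized updates in NumPy.
# The quantizer, step size, and toy quadratic losses below are illustrative
# assumptions; this is not the paper's exact FedPAQ specification.
import numpy as np

rng = np.random.default_rng(0)

def quantize(v, levels=4):
    """Unbiased stochastic quantizer: each coordinate is randomly rounded
    to one of `levels + 1` points on [0, ||v||], so E[quantize(v)] = v."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    scaled = np.abs(v) / norm * levels           # each entry in [0, levels]
    low = np.floor(scaled)
    round_up = rng.random(v.shape) < (scaled - low)
    return np.sign(v) * (low + round_up) * norm / levels

def local_grad(x, a_i, b_i):
    """Gradient of a toy local loss 0.5 * ||a_i * x - b_i||^2."""
    return a_i * (a_i * x - b_i)

# n devices, each holding its own quadratic objective (heterogeneous data).
n, dim, rounds, local_steps, lr = 10, 5, 50, 4, 0.1
a = rng.uniform(0.5, 1.5, size=n)
b = rng.normal(size=(n, dim))
x_server = np.zeros(dim)

for _ in range(rounds):
    updates = []
    for i in range(n):
        x = x_server.copy()                 # start from the current model
        for _ in range(local_steps):        # several local SGD steps ...
            x -= lr * local_grad(x, a[i], b[i])
        updates.append(quantize(x - x_server))  # ... then one quantized upload
    x_server += np.mean(updates, axis=0)    # periodic averaging at the server

print("server model after training:", np.round(x_server, 3))
```

The point of the sketch is the communication pattern: only the quantized model difference x - x_server crosses the network, and it crosses only once per `local_steps` local iterations rather than after every gradient step.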
