Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization

Neural Information Processing Systems

Communication overhead is one of the key challenges that hinders the scalability of distributed optimization algorithms. In this paper, we study local distributed SGD, where data is partitioned among compute nodes, and the nodes perform local updates while periodically exchanging models among the workers for averaging. While local SGD has empirically been shown to provide promising results, a theoretical understanding of its performance remains open. In this paper, we strengthen the convergence analysis for local SGD and show that local SGD can be far less expensive and applied far more generally than current theory suggests. Specifically, we show that for loss functions that satisfy the Polyak-Łojasiewicz condition, $O((pT)^{1/3})$ rounds of communication suffice to achieve a linear speedup, that is, an error of $O(1/pT)$, where $T$ is the total number of model updates at each worker. This contrasts with previous work, which required a higher number of communication rounds and was limited to strongly convex loss functions to obtain similar asymptotic performance. We also develop an adaptive synchronization scheme that provides a general condition for linear speedup.
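The scheme the abstract describes can be illustrated with a minimal sketch: each of $p$ workers runs SGD on its own data shard and, every fixed number of local steps, all workers average their models. This is a toy illustration on a least-squares objective, not the paper's algorithm or notation; the synchronization period `tau`, the shard construction, and all hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of local SGD with periodic averaging on a toy
# least-squares problem. tau (sync period), the shards, and all
# hyperparameters are assumptions for illustration only.
rng = np.random.default_rng(0)
p, d, T, tau = 4, 5, 60, 6  # workers, dimension, local steps per worker, sync period

# Each worker i holds its own data shard (A_i, b_i).
shards = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(p)]

def grad(w, A, b, batch=5):
    # Stochastic minibatch gradient of 0.5*||A w - b||^2 / n on a sampled batch.
    idx = rng.choice(len(b), size=batch, replace=False)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / batch

w = [np.zeros(d) for _ in range(p)]  # all workers start from the same model
lr = 0.01
for t in range(1, T + 1):
    for i in range(p):
        w[i] = w[i] - lr * grad(w[i], *shards[i])  # local SGD update
    if t % tau == 0:            # every tau local steps: one communication round
        avg = sum(w) / p        # average models across all workers
        w = [avg.copy() for _ in range(p)]

# After a synchronization round, every worker holds the identical averaged model.
print(np.allclose(w[0], w[1]))
```

The adaptive scheme the paper proposes would vary `tau` across training rather than keeping it fixed; the fixed-period loop above is only the baseline pattern being analyzed.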


Reviews: Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization

Neural Information Processing Systems

The paper tightens the analysis of local SGD with periodic averaging of the models at different nodes. It improves the bound on the number of communication rounds sufficient to achieve linear speedup and relaxes some of the assumptions used in previous works. The authors also develop an adaptive scheme for choosing the communication period, based on intuition from their theoretical results. They also provide empirical results on a logistic regression problem with the epsilon dataset to support the theoretical results. Comments: The paper is well written and easy to follow.



Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization

Haddadpour, Farzin, Kamani, Mohammad Mahdi, Mahdavi, Mehrdad, Cadambe, Viveck

Neural Information Processing Systems
