Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD
Large-scale machine learning training, in particular distributed stochastic gradient descent, needs to be robust to inherent system variability such as node straggling and random communication delays. This work considers a distributed training framework where each worker node is allowed to perform local model updates and the resulting models are averaged periodically. We analyze the true speed of error convergence with respect to wall-clock time (instead of the number of iterations), and study how it is affected by the frequency of averaging.

Stochastic gradient descent (SGD) is the backbone of state-of-the-art supervised learning, which is revolutionizing inference and decision-making in many diverse applications. Classical SGD was designed to run on a single computing node, and its error convergence with respect to the number of iterations has been extensively analyzed and improved via accelerated SGD methods. Due to the massive training datasets and neural network architectures used today, it has become imperative to design distributed SGD implementations, where gradient computation and aggregation are parallelized across multiple worker nodes. Although parallelism boosts the amount of data processed per iteration, it exposes SGD to unpredictable node slowdowns and communication delays stemming from variability in the computing infrastructure. Thus, there is a critical need to make distributed SGD fast, yet robust to system variability.
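To make the local-update scheme concrete, here is a minimal sketch of local-update SGD with periodic model averaging: each of m workers performs tau local SGD steps before all models are averaged and broadcast back. This is not the authors' implementation; the least-squares gradient, data layout, and parameter names (tau, rounds, lr) are illustrative assumptions.

```python
import numpy as np

def stochastic_gradient(w, batch):
    # Hypothetical least-squares mini-batch gradient, used only for illustration.
    X, y = batch
    return X.T @ (X @ w - y) / len(y)

def local_update_sgd(data_per_worker, dim, lr=0.01, tau=10, rounds=50):
    # data_per_worker: list of (X, y) pairs, one local dataset per worker node.
    m = len(data_per_worker)
    workers = [np.zeros(dim) for _ in range(m)]
    for _ in range(rounds):                       # communication rounds
        for k in range(m):                        # each worker updates locally
            X, y = data_per_worker[k]
            for _ in range(tau):                  # tau local SGD steps between averages
                idx = np.random.choice(len(y), size=8)
                g = stochastic_gradient(workers[k], (X[idx], y[idx]))
                workers[k] = workers[k] - lr * g
        w_avg = np.mean(workers, axis=0)          # periodic averaging of the m models
        workers = [w_avg.copy() for _ in range(m)]
    return workers[0]
```

A larger tau means fewer communication rounds per wall-clock second (faster runtime per iteration) but allows the local models to drift apart, which is the error-runtime trade-off the paper analyzes.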
Oct-18-2018