Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging

Shigang Li, Tal Ben-Nun, Giorgi Nadiradze, Salvatore Di Girolamo, Nikoli Dryden, Dan Alistarh, Torsten Hoefler

arXiv.org Artificial Intelligence 

Abstract--Deep learning at scale is dominated by communication time. Distributing samples across nodes usually yields the best performance, but poses scaling challenges due to global information dissemination and load imbalance across uneven sample lengths. State-of-the-art decentralized optimizers mitigate the problem, but require more iterations to achieve the same accuracy as their globally-communicating counterparts. We present Wait-Avoiding Group Model Averaging (WAGMA) SGD, a wait-avoiding stochastic optimizer that reduces global communication via subgroup weight exchange. The key insight is a combination of algorithmic changes to the averaging scheme and the use of a group allreduce operation. We prove the convergence of WAGMA-SGD, and empirically show that it retains convergence rates similar to Allreduce-SGD. For evaluation, we train ResNet-50 on ImageNet; Transformer for machine translation; and deep reinforcement learning for navigation at scale. Compared with state-of-the-art decentralized SGD variants, WAGMA-SGD significantly improves training throughput (e.g., 2.1×).

Index Terms--stochastic gradient descent, distributed deep learning, decentralized optimization.

I. INTRODUCTION

Deep learning is one of the most important advancements in science over the past two decades, powering industries from autonomous driving [1] to drug discovery [2]. With the rise of deep neural networks, their training has evolved into a computationally-intensive task that consumes as many resources as modern complex high-performance computing problems [3]. As a result, an abundance of research has been conducted into its scaling and distribution [4]. The leading contenders for the largest workloads in deep learning are Neural Language Models [5], [6], Deep Reinforcement Learning (RL) [7], [8], and Neural Architecture Search [9].
In these regimes, computation time is measured in thousands of "GPU days", with some utilizing hundreds of accelerators (GPUs, TPUs) for several weeks [7], [10], [11].
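To make the subgroup weight exchange described in the abstract concrete, the following is a minimal NumPy sketch of group model averaging: instead of one global allreduce over all workers, disjoint subgroups each average their model parameters among themselves. The function name `group_average` and its signature are hypothetical illustrations, not the authors' implementation (which uses an actual group allreduce collective).

```python
import numpy as np

def group_average(models, group_size):
    """Average model parameters within disjoint subgroups of workers.

    models: list of 1-D parameter vectors, one per worker.
    Each consecutive block of `group_size` workers averages only among
    themselves, a stand-in for the group allreduce in WAGMA-SGD
    (hypothetical sketch, not the paper's actual implementation).
    """
    averaged = []
    for start in range(0, len(models), group_size):
        group = models[start:start + group_size]
        # Result every member of this subgroup would receive from
        # an allreduce-with-average over the subgroup.
        avg = np.mean(group, axis=0)
        averaged.extend(avg.copy() for _ in group)
    return averaged

# Example: four workers, subgroups of two. Workers 0-1 average to 1.0,
# workers 2-3 average to 5.0; no information crosses group boundaries.
workers = [np.array([0.0]), np.array([2.0]),
           np.array([4.0]), np.array([6.0])]
result = group_average(workers, group_size=2)
```

Communicating within a group of size k rather than globally replaces an O(log P)-depth collective over all P workers with one over k workers, which is the source of the throughput gains the abstract reports; information still spreads globally over multiple iterations as group membership or averaging repeats.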