CO2: Efficient Distributed Training with Full Communication-Computation Overlap

Weigao Sun, Zhen Qin, Weixuan Sun, Shidi Li, Dong Li, Xuyang Shen, Yu Qiao, Yiran Zhong

arXiv.org Artificial Intelligence 

The success of large language models hinges on the effective implementation of large-scale distributed training techniques. Nevertheless, building a vast, high-performance cluster featuring high-speed communication interconnectivity is prohibitively costly and accessible only to prominent entities. In this work, we aim to lower this barrier and democratize large-scale training on clusters with limited bandwidth. We propose a new approach called CO2 that introduces local updating and asynchronous communication to distributed data-parallel training, thereby facilitating the full overlap of COmmunication with COmputation. CO2 attains high scalability even on extensive multi-node clusters constrained by very limited communication bandwidth. We further propose the staleness gap penalty and outer momentum clipping techniques alongside CO2 to bolster its convergence and training stability. In addition, CO2 integrates seamlessly with well-established ZeRO-series optimizers, which mitigate the memory consumption of model states when training large models. We also provide a mathematical proof of convergence, accompanied by a tight upper bound. Extensive experiments demonstrate the convergence, generalization, and scalability of CO2 when deployed across configurations comprising up to 128 A100 GPUs. The results highlight its ability to greatly improve scalability, whether on clusters with 800Gbps RDMA or 80Gbps TCP/IP inter-node connections.

Distributed optimization is crucial for the efficient training of large-scale deep neural networks. Mini-batch parallel optimization methods (Goyal et al., 2017; Li et al., 2014) such as stochastic gradient descent (SGD) under the distributed data parallel (DDP) paradigm are commonly used, but communication overhead can pose significant challenges when scaling out to larger GPU clusters. Existing techniques leverage gradient bucketing to partially overlap communication with backward computation and thereby improve training efficiency, but residual overhead remains a challenge in scenarios with large model sizes and limited inter-node communication bandwidth. Various strategies have been proposed to address these communication-related issues.
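To make the gradient-bucketing overlap mentioned above concrete, the following is a minimal sketch (not DDP's actual implementation, which groups gradients into buckets) of launching an asynchronous all-reduce for each parameter's gradient as soon as the backward pass produces it. It assumes PyTorch >= 2.1 and an already-initialized torch.distributed process group; the function name overlap_grad_allreduce is hypothetical.

```python
import torch
import torch.distributed as dist


def overlap_grad_allreduce(model):
    """Launch an async all-reduce per gradient as soon as backward produces it,
    so communication overlaps the remaining backward computation. Real DDP
    buckets gradients; this per-parameter version is only an illustration."""
    handles = []

    def hook(param):
        # Fires after the gradient has been accumulated into param.grad.
        handles.append(dist.all_reduce(param.grad, async_op=True))

    for p in model.parameters():
        if p.requires_grad:
            p.register_post_accumulate_grad_hook(hook)
    return handles


# Per iteration (sketch): run loss.backward(), then wait on every handle,
# divide each .grad by dist.get_world_size(), call optimizer.step(), and
# clear the handle list. The tail of communication that finishes after the
# backward pass ends is exactly the residual overhead noted above.
```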
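The sketch below illustrates the overlap pattern that the abstract attributes to CO2: a fixed number of purely local optimizer steps, followed by an asynchronous all-reduce of the accumulated parameter delta that completes while the next round of local steps runs, so each round folds in a one-step-stale cross-worker average. This is only a minimal sketch under assumed names (tau, local_update_with_async_allreduce, a generic classification loader), not the authors' algorithm; CO2's outer momentum, staleness gap penalty, outer momentum clipping, and ZeRO integration are omitted. It assumes an initialized torch.distributed process group (e.g. launched via torchrun).

```python
import torch
import torch.distributed as dist
import torch.nn.functional as F


def local_update_with_async_allreduce(model, opt, loader, tau=4):
    """Run tau local steps per outer round; the all-reduce of the previous
    round's delta overlaps with the current round's computation."""
    params = list(model.parameters())
    anchor = [p.detach().clone() for p in params]   # params at the start of the round
    pending = None                                  # in-flight all-reduce from last round
    for step, (x, y) in enumerate(loader):
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()                                  # purely local step, no per-step sync
        if (step + 1) % tau != 0:
            continue
        with torch.no_grad():
            # Accumulated local progress of the round that just finished.
            deltas = [p.detach() - a for p, a in zip(params, anchor)]
            shapes = [d.shape for d in deltas]
            flat = torch.cat([d.flatten() for d in deltas])
            own = flat.clone()
            # The all-reduce launched one round ago overlapped the tau local
            # steps above; swap this worker's own previous-round delta for the
            # (stale) cross-worker average.
            if pending is not None:
                prev_handle, prev_flat, prev_own, prev_shapes = pending
                prev_handle.wait()
                correction = prev_flat / dist.get_world_size() - prev_own
                offset = 0
                for p, shp in zip(params, prev_shapes):
                    n = shp.numel()
                    p.add_(correction[offset:offset + n].view(shp))
                    offset += n
            # Launch communication for this round's delta; it overlaps the
            # next tau local steps instead of blocking them.
            handle = dist.all_reduce(flat, async_op=True)
            pending = (handle, flat, own, shapes)
            anchor = [p.detach().clone() for p in params]
```

Because the all-reduce is only waited on at the next synchronization point, communication can hide entirely behind the intervening local computation when tau is large enough relative to the available bandwidth, which is the full-overlap property the abstract describes.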