Demystifying the Communication Characteristics for Distributed Transformer Models
Quentin Anthony, Benjamin Michalowicz, Jacob Hatef, Lang Xu, Mustafa Abduljabbar, Aamir Shafi, Hari Subramoni, Dhabaleswar Panda
Deep learning (DL) models based on the transformer architecture have revolutionized many DL applications such as large language models (LLMs), vision transformers, audio generation, and time series prediction. Much of this progress has been fueled by distributed training, yet distributed communication remains a substantial bottleneck to training progress. This paper examines the communication behavior of transformer models, that is, how the different parallelism schemes used in multi-node/multi-GPU DL training exchange data in the context of transformers. We use GPT-based language models as a case study of the transformer architecture due to their ubiquity. We validate the empirical results obtained from our communication logs using analytical models. At a high level, our analysis reveals a need to further optimize small-message point-to-point communication, correlations between sequence length, per-GPU throughput, model size, and the optimizations applied, and opportunities to guide future work on framework and HPC middleware design and optimization.
arXiv.org Artificial Intelligence
Aug-19-2024
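The abstract notes that the empirical communication logs are validated against analytical models. As a rough illustration of what such a model can look like (not the paper's actual formulation), the sketch below estimates per-GPU communication volume per training step for data-parallel gradient synchronization and Megatron-style tensor parallelism, assuming ring all-reduce. All function names, parallelism degrees, and model dimensions here are illustrative assumptions.

```python
# Illustrative analytical sketch of per-GPU communication volume per training step.
# Assumptions: ring all-reduce cost of 2*(N-1)/N * message size per rank, and
# Megatron-style tensor parallelism with ~4 activation all-reduces per layer
# (2 in the forward pass, 2 in the backward pass). Not the paper's exact model.

def ring_allreduce_bytes(message_bytes: float, world_size: int) -> float:
    """Bytes communicated per rank by a ring all-reduce of `message_bytes`."""
    return 2.0 * (world_size - 1) / world_size * message_bytes

def data_parallel_step_bytes(num_params: float, dp_size: int,
                             bytes_per_grad: int = 4) -> float:
    """Gradient all-reduce volume per rank per step under pure data parallelism."""
    return ring_allreduce_bytes(num_params * bytes_per_grad, dp_size)

def tensor_parallel_step_bytes(num_layers: int, batch: int, seq_len: int,
                               hidden: int, tp_size: int,
                               bytes_per_act: int = 2) -> float:
    """Activation all-reduce volume per rank per step for tensor parallelism,
    assuming ~4 all-reduces per layer over a [batch, seq_len, hidden] tensor."""
    act_bytes = batch * seq_len * hidden * bytes_per_act
    return num_layers * 4 * ring_allreduce_bytes(act_bytes, tp_size)

if __name__ == "__main__":
    # Hypothetical GPT-style configuration, chosen only for illustration.
    dp = data_parallel_step_bytes(num_params=1.3e9, dp_size=8)
    tp = tensor_parallel_step_bytes(num_layers=24, batch=4, seq_len=2048,
                                    hidden=2048, tp_size=4)
    print(f"Data parallel:   {dp / 1e9:.2f} GB per rank per step")
    print(f"Tensor parallel: {tp / 1e9:.2f} GB per rank per step")
```

Models of this form make it possible to cross-check logged message counts and sizes against what the parallelism scheme and model dimensions predict, which is the kind of validation the abstract describes.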