Decentralized Training of Foundation Models in Heterogeneous Environments, Jared Quincy Davis
Neural Information Processing Systems
Training foundation models, such as GPT-3 and PaLM, can be extremely expensive, often involving tens of thousands of GPUs running continuously for months. These models are typically trained in specialized clusters featuring fast, homogeneous interconnects and using carefully designed software systems that support both data parallelism and model/pipeline parallelism. Such dedicated clusters can be costly and difficult to obtain. Can we instead leverage the much greater amount of decentralized, heterogeneous, and lower-bandwidth interconnected compute? Previous works examining the heterogeneous, decentralized setting focus on relatively small models that can be trained in a purely data parallel manner.
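The distinction the abstract draws between data parallelism and model/pipeline parallelism is the crux of the setting. Below is a minimal, single-process sketch of the two styles; the module shapes, the two-replica setup, and the sequential stand-in for an all-reduce are illustrative assumptions, not the paper's actual system.

```python
# Minimal sketch of the two parallelism styles named in the abstract.
# Shapes and the sequential "all-reduce" are illustrative assumptions.
import torch
import torch.nn as nn

def data_parallel_step(replicas, shards, loss_fn):
    """Data parallelism: every worker holds a full model replica, computes
    gradients on its own data shard, then gradients are averaged across
    replicas (done sequentially here, standing in for an all-reduce)."""
    for model, (x, y) in zip(replicas, shards):
        model.zero_grad()
        loss_fn(model(x), y).backward()
    for params in zip(*(m.parameters() for m in replicas)):
        mean_grad = torch.stack([p.grad for p in params]).mean(dim=0)
        for p in params:
            p.grad = mean_grad.clone()

# Pipeline (model) parallelism: the model is split into stages that could
# live on different machines; activations flow stage to stage, so each hop
# would cross a network link in a decentralized deployment.
stages = [nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)]

def pipeline_forward(x):
    for stage in stages:
        x = stage(x)
    return x

if __name__ == "__main__":
    torch.manual_seed(0)
    replicas = [nn.Linear(32, 10) for _ in range(2)]
    replicas[1].load_state_dict(replicas[0].state_dict())  # identical copies
    shards = [(torch.randn(8, 32), torch.randint(0, 10, (8,)))
              for _ in replicas]
    data_parallel_step(replicas, shards, nn.CrossEntropyLoss())
    print(pipeline_forward(torch.randn(4, 32)).shape)  # torch.Size([4, 10])
```

In a dedicated cluster, both the gradient all-reduce and the stage-to-stage activation transfers ride a fast, homogeneous interconnect; in the decentralized setting the abstract targets, those same transfers must traverse slow, heterogeneous links, which is what makes the problem hard.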