A Look Into Training Large Language Models on Next Generation Datacenters
Gherghescu, Alexandru M., Bădoiu, Vlad-Andrei, Agache, Alexandru, Dumitru, Mihai-Valentin, Vasilescu, Iuliu, Mantu, Radu, Raiciu, Costin
arXiv.org Artificial Intelligence
Is it still worth doing computer networking research? What are the relevant problems in this space, given the supremacy of hyperscalers in deployed large networks? We take an unconventional approach to finding relevant research directions, starting from Microsoft's plans to build a $100 billion datacenter for ML. Our goal is to understand what models could be trained in such a datacenter, as well as the high-level challenges one may encounter in doing so. We first examine the constraints imposed by cooling and power requirements for our target datacenter and find that it is infeasible to build in a single location. We use LLM scaling laws to determine that we could train models of 50T or 100T parameters. Finally, we examine how distributed training might work for these models, and what the networking requirements are. We conclude that building the datacenter and training such models is technically possible, but this requires a novel NIC-based multipath transport along with a redesign of the entire training stack, outlining a research agenda for our community in the near future.
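The abstract's use of LLM scaling laws to size trainable models can be illustrated with a Chinchilla-style compute-optimal calculation. The sketch below is not the paper's method; it uses the common approximations that training cost is C ≈ 6·N·D FLOPs and that the compute-optimal token count is D ≈ 20·N, and the datacenter figures (accelerator count, per-device throughput, utilization, training duration) are purely illustrative assumptions:

```python
import math

def chinchilla_optimal(total_flops: float) -> tuple[float, float]:
    """Compute-optimal parameter/token split under Chinchilla-style
    approximations: C ~= 6*N*D and D ~= 20*N, so N = sqrt(C / 120)."""
    n_params = math.sqrt(total_flops / 120.0)
    n_tokens = 20.0 * n_params
    return n_params, n_tokens

# Hypothetical budget (all numbers are assumptions, not from the paper):
# 1e6 accelerators at 1e15 FLOP/s each, 40% utilization, for 100 days.
flops = 1e6 * 1e15 * 0.4 * 100 * 86_400
n, d = chinchilla_optimal(flops)
print(f"~{n / 1e12:.1f}T parameters on ~{d / 1e12:.0f}T tokens")
```

Varying the assumed fleet size and training duration moves the answer into the tens-of-trillions-of-parameters range the abstract discusses; the point is only that the feasible model size scales as the square root of the total compute budget.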
Jul-1-2024