How Nvidia built Selene, the world's seventh-fastest computer, in three weeks

ZDNet

Five years ago, Nvidia set out to design a supercomputer-class system powerful enough to train and run its own AI models, such as models for autonomous vehicles, yet flexible enough to serve just about any deep-learning researcher. After building multiple iterations of its DGX Pods, Nvidia learned valuable lessons about building a system from modular, scalable units. The COVID-19 outbreak brought new challenges for Nvidia as it set out to build Selene, the fourth generation of its DGX SuperPODs. Reduced staff and building restrictions complicated the task, but Nvidia managed to go from bare racks in the data center to a fully operational system in just three and a half weeks. Selene is now a Top 10 supercomputer, the fastest industrial system in the US, and the fastest commercially available MLPerf machine.


NVIDIA CEO: AI Workloads Will "Flood" Data Centers

Data Center Knowledge


During a keynote at his company's big annual conference in Silicon Valley last week, NVIDIA CEO Jensen Huang spent several hours announcing the chipmaker's latest products and innovations, and driving home the inevitability of the force that is Artificial Intelligence. NVIDIA is the top maker of GPUs used in computing systems for Machine Learning, currently the most active part of the AI field. GPUs work in tandem with CPUs, accelerating the processing needed both to train machines to perform certain tasks and to execute them. "Machine Learning is one of the most important computer revolutions ever," Huang said. "The number of [research] papers in Deep Learning is just absolutely explosive."