NVIDIA & ORNL Researchers Train AI Model on World's Top Supercomputer Using 27,600 NVIDIA GPUs
In 2012, Geoffrey Hinton's research team used just two NVIDIA GPUs to train AlexNet, the revolutionary network architecture that handily won the ImageNet Large Scale Visual Recognition Challenge. It probably never occurred to those groundbreaking researchers that only seven years later, a new team would use more than 10,000 times as many GPUs to train an AI model.

A research team from NVIDIA, Oak Ridge National Laboratory (ORNL), and Uber has introduced new techniques that enabled them to train a fully convolutional neural network on the world's fastest supercomputer, Summit, with up to 27,600 NVIDIA GPUs. They achieved an impressive near-linear scaling efficiency of 0.93 in distributed training and produced a model capable of atomically accurate reconstruction of materials, addressing a longstanding scientific problem in materials imaging.

In June 2018, the US Department of Energy's Oak Ridge National Laboratory in Tennessee unveiled Summit, the world's fastest supercomputer, whose computing power reaches 200 petaflops.
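The article does not detail the team's distributed-training setup, but the headline result rests on synchronous data-parallel training, where each GPU processes its own mini-batch and gradients are averaged across all workers every step. Below is a minimal, hypothetical sketch of that pattern using Horovod, the open-source library developed at Uber; the model, data, and hyperparameters are placeholders for illustration, not the authors' actual code or configuration.

```python
# Hypothetical sketch of synchronous data-parallel training with Horovod.
# Launched with one process per GPU, e.g.: horovodrun -np 8 python train.py
import torch
import torch.nn as nn
import horovod.torch as hvd

hvd.init()                                   # one worker process per GPU
torch.cuda.set_device(hvd.local_rank())      # pin this process to its local GPU

# Placeholder stand-in for the paper's fully convolutional network.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
).cuda()

# Common practice: scale the learning rate with the number of workers.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer so gradients are allreduce-averaged across all GPUs.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())

# Start every worker from identical weights.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

loss_fn = nn.MSELoss()
for step in range(100):
    x = torch.randn(8, 1, 64, 64).cuda()     # placeholder per-worker batch
    y = torch.randn(8, 1, 64, 64).cuda()
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()                         # gradient allreduce happens here
```

In this scheme the per-step work grows with the number of GPUs while the communication is overlapped with computation, which is what makes scaling efficiencies close to 1.0, like the 0.93 reported here, achievable in practice.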
Oct-5-2019, 05:56:34 GMT