In under a month amid the global pandemic, a small team assembled the world's seventh-fastest computer. Today that mega-system, called Selene, communicates with its operators on Slack, has its own robot attendant and is driving AI forward in automotive, healthcare and natural-language processing. While many supercomputers tap exotic, proprietary designs that take months to commission, Selene is based on an open architecture NVIDIA shares with its customers. The Argonne National Laboratory, outside Chicago, is using a system based on Selene's DGX SuperPOD design to research ways to stop the coronavirus. The University of Florida will use the design to build the fastest AI computer in academia.
Supercomputers typically take years to build, requiring many service personnel working around the clock for months before a system is commissioned. Beating those odds, NVIDIA claims to have built its supercomputer within three weeks. Not only did NVIDIA assemble a mammoth of a computer in a short time, it also broke records in the recently conducted MLPerf benchmark tests.
Nvidia Corp. said today it managed to build Selene, the world's seventh-fastest supercomputer, used by the Argonne National Laboratory to research ways to stop the coronavirus, in just under three weeks. Selene has been deployed to tackle problems such as protein docking and quantum chemistry, which are key to understanding the coronavirus and developing a potential cure for the COVID-19 disease. Nvidia said Selene is based on its most advanced DGX SuperPOD architecture, a system developed for artificial intelligence workloads that was announced earlier this year. The DGX SuperPOD is built from DGX A100 systems, each incorporating eight of Nvidia's latest A100 graphics processing units, which are designed for data analytics, scientific computing and cloud graphics workloads. Building Selene so rapidly in the middle of a pandemic was no easy feat, but Nvidia said in a blog post it was able to draw on its earlier experience piecing together supercomputers based on its older DGX-2 systems.
At the end of last year, Nvidia unveiled plans to start building Cambridge-1, a £40 million ($51.7 million) machine that would become the UK's fastest supercomputer. With a global health crisis still in full swing, Nvidia's team faced a host of potential challenges; remotely managing the assembly of a supercomputer on the other side of the Atlantic was bound to come with its fair share of unforeseen complications. Yet only 20 weeks after it was first announced, Cambridge-1 has already entered its first stages of operation – a timeline impressive enough in normal circumstances, let alone in the context of a pandemic. For comparison, most supercomputers currently on the Top500 list took, on average, a couple of years from concept planning to final build.
The third round of MLPerf training benchmark scores for eight different AI models are out, with rivals Nvidia and Google both staking a claim to the crown. While both companies claimed victory, the results bear further scrutiny. Scores are based on systems, not individual accelerator chips. While Nvidia swept the board for commercially available systems with its Ampere A100-based supercomputer, Google's massive TPU v3 system and smaller TPU v4 systems, which it entered under the research category, make the search giant a strong contender. Nvidia took first place in normalized results for all benchmarks in the commercially available systems category with its A100-based systems.
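Because MLPerf scores are reported per system rather than per chip, comparing systems of very different sizes usually involves normalizing time-to-train by accelerator count. The sketch below illustrates that idea with entirely hypothetical entries (the system names, chip counts, and minutes are invented for illustration, not actual MLPerf submissions):

```python
# Minimal sketch of per-accelerator normalization for MLPerf-style
# training results. All entries are hypothetical, for illustration only.
results = [
    {"system": "System A", "accelerators": 2048, "minutes": 0.8},
    {"system": "System B", "accelerators": 256, "minutes": 4.0},
]

def chip_minutes(entry):
    # Scale time-to-train by accelerator count, giving a rough
    # "chip-minutes" cost where lower means better per-chip efficiency.
    return entry["minutes"] * entry["accelerators"]

for r in results:
    print(f"{r['system']}: {chip_minutes(r):.1f} chip-minutes")
```

On these made-up numbers, the larger system finishes faster in wall-clock terms but consumes more chip-minutes, which is why headline "fastest system" claims and per-chip efficiency claims can point in opposite directions.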