NetApp, a global, cloud-led, data-centric software company, announced that NetApp EF600 all-flash NVMe storage combined with the BeeGFS parallel file system is now certified for NVIDIA DGX SuperPOD. The new certification simplifies artificial intelligence (AI) and high-performance computing (HPC) infrastructure, enabling faster implementation of these use cases. Since 2018, NetApp and NVIDIA have served hundreds of customers with a range of solutions, from building AI Centers of Excellence to solving massive-scale AI training challenges. The qualification of the NetApp EF600 and the BeeGFS file system for DGX SuperPOD is the latest addition to the complete set of AI solutions the two companies have developed. NetApp's portfolio of NVIDIA-accelerated solutions includes ONTAP AI, which eliminates guesswork and speeds adoption by pairing a field-proven reference architecture with a preconfigured, integrated solution that is easy to procure and deploy in a turnkey manner.
Academia, hyperscalers, and scientific researchers have been the big beneficiaries of high-performance computing and AI infrastructure, yet businesses have largely been on the outside looking in. NVIDIA DGX SuperPOD gives businesses a proven design formula for building and running enterprise-grade AI infrastructure at extreme scale. The reference architecture is a prescription businesses can follow to avoid exhaustive, protracted design and deployment cycles and capital budget overruns. It's available as a consumable solution that now integrates with leading names in data center IT -- including DDN, IBM, Mellanox, and NetApp -- and is fulfilled through a network of qualified resellers.
Nvidia says it is on a mission to democratise AI and make it more accessible to every business, and for a price just shy of six figures, it is opening up access to its DGX SuperPod machines, which consist of 20 or more DGX systems. Building on its recent GTC announcements, which introduced its Base Command software for controlling AI workloads, Nvidia said it had teamed up with NetApp to launch its cloud-based Base Command Platform, which will provide access to SuperPods from $90,000 a month, starting later in the northern hemisphere summer. NetApp is providing the flash storage and managing the customers, while Nvidia owns the equipment, which is housed in Equinix data centres. "The intent here is that customers can have access to this powerful supercomputer, the SuperPod, just on a rental basis; they can experience it, they can do their work, and from there they can graduate to either acquiring their own SuperPod or to go and do AI at scale, for example in the public cloud," Nvidia head of enterprise computing Manuvir Das said. "What this does is create a true hybrid model, where for the customer, it's the same single interface for submitting their jobs and doing all their AI work. That interface can be used for their own SuperPod equipment that is on-premises or for infrastructure from instances with GPUs in the cloud, but it's the same experience either way."
NVIDIA announced on Monday that its new hosted AI development hub -- the NVIDIA Base Command Platform -- is now available to North American customers after debuting in May. NVIDIA said in a statement that the platform "provides enterprises with instant access to powerful computing infrastructure wherever their data resides." The platform can be rented for a monthly subscription price of $90,000, with a three-month minimum on all subscriptions, NVIDIA explained. Manuvir Das, head of Enterprise Computing at NVIDIA, said the Base Command Platform makes it easy for enterprises to instantly access the power of an NVIDIA DGX SuperPOD to "accelerate the AI and data science development lifecycle."
In this blog series, I've focused on how NetApp can help you streamline your artificial intelligence projects. With technologies and services for managing data everywhere, NetApp is well positioned to solve your AI data challenges. Built on our partnership with NVIDIA and powered by NVIDIA DGX supercomputers and NetApp all-flash storage, ONTAP AI lets you simplify, accelerate, and scale your AI data pipeline to gain deeper understanding in less time. Combining Data Fabric-enabled NetApp storage with GPU-accelerated NVIDIA computing systems delivers capabilities that aren't available from other turnkey AI solutions, whether on-premises or in the cloud. Here are five key advantages of ONTAP AI.