NetApp, a global, cloud-led, data-centric software company, announced that the NetApp EF600 all-flash NVMe storage array, combined with the BeeGFS parallel file system, is now certified for NVIDIA DGX SuperPOD. The certification simplifies artificial intelligence (AI) and high-performance computing (HPC) infrastructure, enabling faster implementation of these use cases. Since 2018, NetApp and NVIDIA have served hundreds of customers with solutions ranging from building AI Centers of Excellence to solving massive-scale AI training challenges. The qualification of the NetApp EF600 and BeeGFS for DGX SuperPOD is the latest addition to the complete set of AI solutions the companies have developed. NetApp's portfolio of NVIDIA-accelerated solutions includes ONTAP AI, which eliminates guesswork through a field-proven reference architecture and is offered as a preconfigured, integrated solution that is easy to procure and deploy in a turnkey manner.
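For context on what consuming a BeeGFS file system from compute nodes involves, a minimal client configuration might look like the sketch below. The hostname and mount point are illustrative assumptions, not values from the certified DGX SuperPOD design.

```ini
# /etc/beegfs/beegfs-client.conf (minimal sketch)
# Points the BeeGFS client at the file system's management daemon.
# "beegfs-mgmt.example.com" is a placeholder hostname.
sysMgmtdHost = beegfs-mgmt.example.com

# /etc/beegfs/beegfs-mounts.conf (separate file)
# Mounts the file system at /mnt/beegfs using the client config above.
/mnt/beegfs /etc/beegfs/beegfs-client.conf
```

In a real deployment, striping, network, and tuning parameters would come from the published reference architecture rather than defaults.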
Visionaries in virtually all industries are looking for ways to apply artificial intelligence (AI) to enable new customer touchpoints, reinvent the customer experience, drive business value, and even change the world. For example, adding a few sensors to an asthma inhaler opens a huge opportunity to correlate usage and location information among patients. Unfortunately, many organizations still underestimate how much AI depends on the ability to marshal and manage vast quantities of data. To help put your deep learning projects on a path to real business impact, NetApp is today announcing the NetApp ONTAP AI proven architecture. Powered by NVIDIA DGX supercomputers and NetApp all-flash storage, ONTAP AI lets you simplify, accelerate, and scale the data pipeline that AI needs to gain deeper understanding in less time.
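The inhaler scenario can be sketched in a few lines: given hypothetical usage events tagged with a patient and a location, correlating usage with location reduces to counting events per place. The data and field names below are illustrative assumptions, not part of any NetApp or NVIDIA product.

```python
from collections import Counter

# Hypothetical inhaler telemetry: (patient_id, city) per usage event.
# Purely illustrative sample data.
usage_events = [
    ("p1", "Fresno"), ("p2", "Fresno"), ("p1", "Fresno"),
    ("p3", "Denver"), ("p2", "Denver"),
]

# Correlate usage with location: count events per city.
events_by_city = Counter(city for _, city in usage_events)

# Rank cities by inhaler activity, highest first, to flag hotspots.
hotspots = events_by_city.most_common()
```

At production scale this counting step would run over the full data pipeline the architecture is designed to feed, but the correlation itself is this simple.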
Academia, hyperscalers, and scientific researchers have been big beneficiaries of high-performance computing and AI infrastructure, yet businesses have largely been on the outside looking in. NVIDIA DGX SuperPOD provides businesses with a proven design formula for building and running enterprise-grade AI infrastructure at extreme scale. The reference architecture offers a prescription to follow, helping businesses avoid exhaustive, protracted design and deployment cycles and capital budget overruns. It is available as a consumable solution that now integrates with leading names in data center IT, including DDN, IBM, Mellanox, and NetApp, and is fulfilled through a network of qualified resellers.
NetApp and NVIDIA have introduced a combined AI reference architecture to rival the Pure Storage-NVIDIA AIRI system. It is aimed at deep learning and, unlike FlexPod (Cisco and NetApp's converged infrastructure), has no brand name. Unlike AIRI, it also has no enclosure of its own. A NetApp and NVIDIA technical whitepaper, Scalable AI Infrastructure: Designing for Real-World Deep Learning Use Cases (PDF), defines a reference architecture (RA) pairing a NetApp A800 all-flash storage array with NVIDIA DGX-1 GPU server systems. A slower, less expensive RA based on the A700 array is also available.