Nvidia Unifies AI, HPC Workloads in Datacenters


Nvidia's latest cloud server platform is intended as a "building block," in the reference-design sense, to support AI training and inference alongside HPC workloads such as simulations. The GPU vendor (NASDAQ: NVDA) introduced the platform, dubbed HGX-2, on Wednesday (May 30) during a company roadshow in Taipei, Taiwan. Nvidia said the cloud server's numerical precision can be throttled up or down, from 32-bit single-precision floating point (FP32) up to double-precision FP64 for HPC calculations. Meanwhile, AI training and inference workloads are supported with FP16, or half precision, along with Int8 data. The combination is designed to meet the varying processing requirements of a growing number of enterprise applications that combine AI with HPC, the company noted.
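The four precision modes mentioned above correspond to standard numeric formats. As a rough illustration (not Nvidia-specific code), the bit widths and typical roles of each format can be shown with NumPy's standard dtypes:

```python
import numpy as np

# Standard numeric formats corresponding to the precision modes named
# in the article. Bit widths are IEEE 754 / two's-complement facts;
# the workload labels reflect the article's description.
formats = {
    "FP64 (double precision, HPC)": np.float64,
    "FP32 (single precision, HPC)": np.float32,
    "FP16 (half precision, AI training)": np.float16,
    "Int8 (quantized AI inference)": np.int8,
}

for name, dtype in formats.items():
    bits = np.dtype(dtype).itemsize * 8
    print(f"{name}: {bits} bits per value")
```

Halving the precision doubles the number of values that fit in the same memory and bandwidth budget, which is why inference workloads gravitate toward FP16 and Int8 while simulation codes often require FP64.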
