Nvidia targets neural networks in the datacentre with new benchmark


Nvidia has announced results on a new series of benchmarks tracking the performance of tools for running AI inference both at the edge and in the datacentre. MLPerf Inference 0.5 is the industry's first independent suite of AI inference benchmarks, and the results demonstrate the performance of Nvidia Turing GPUs for datacentres and the Nvidia Xavier system-on-a-chip for edge computing. Nvidia posted the fastest results on the new benchmarks, which measure the performance of AI inference workloads in datacentres and at the edge, building on the company's position in recent benchmarks measuring AI training.

'AI is at a tipping point as it moves swiftly from research to large-scale deployment for real applications,' said Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. 'AI inference is a tremendous computational challenge.'