Nvidia accelerates artificial intelligence, analytics with an ecosystem approach

ZDNet

This proclamation, from NVIDIA co-founder, president, and CEO Jensen Huang at the GPU Technology Conference (GTC), held from March 26 to March 29 in San Jose, Calif., only hints at the company's growing impact on state-of-the-art computing. Nvidia's physical products are accelerators (for third-party hardware) and the company's own GPU-powered workstations and servers. On the hardware front, the headlines from GTC built on the foundation of Nvidia's graphics processing unit advances. If the "feeds and speeds" stats mean nothing to you, let's put them into the context of real workloads.



@machinelearnbot

NVIDIA GPUs have been at the forefront of accelerated neural network processing and are the de facto standard for accelerated neural network research and development (R&D) and deep learning training. At the NVIDIA GPU Technology Conference (GTC) in Beijing, China, earlier this week, the company maneuvered to also become the de facto standard for accelerated neural network inference deployment. At GTC Beijing, NVIDIA lined up the major Chinese cloud companies for AI computing: Alibaba Cloud, Baidu Cloud, and Tencent Cloud. It also announced inference designs with Alibaba Cloud, Tencent, Baidu Cloud, JD.com, and iFlytek.
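To make the training-versus-inference distinction the article draws concrete, here is a minimal sketch in PyTorch (the framework and toy model are illustrative assumptions, not details from the article): a training step flows gradients and updates weights, while inference deployment is a gradient-free forward pass.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
# Toy model standing in for a real network (an assumption for this sketch).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)

# Training: gradients flow and weights are updated -- the workload
# where NVIDIA GPUs became the de facto standard.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(32, 128, device=device)
y = torch.randint(0, 10, (32,), device=device)
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()

# Inference: no gradients, just a forward pass -- the deployment
# workload the GTC Beijing announcements targeted.
model.eval()
with torch.no_grad():
    predictions = model(x).argmax(dim=1)
```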


NVIDIA Enables Era of Interactive Conversational AI with New Inference Software

#artificialintelligence

NVIDIA today introduced groundbreaking inference software that developers everywhere can use to deliver conversational AI applications, slashing the inference latency that until now has impeded true, interactive engagement. NVIDIA TensorRT 7 -- the seventh generation of the company's inference software development kit -- opens the door to smarter human-to-AI interactions, enabling real-time engagement with applications such as voice agents, chatbots and recommendation engines. According to Juniper Research, an estimated 3.25 billion digital voice assistants are in use in devices around the world. By 2023, that number is expected to reach 8 billion, more than the world's total population. TensorRT 7 features a new deep learning compiler designed to automatically optimize and accelerate the increasingly complex recurrent and transformer-based neural networks needed for AI speech applications.
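As a rough illustration of the build step such an inference SDK performs, here is a minimal sketch of a typical TensorRT engine build from an ONNX model, using the TensorRT 7-era Python API; the model path and the FP16 precision choice are assumptions for the example, not details from the announcement.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    """Compile an ONNX model into an optimized TensorRT inference engine."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30      # 1 GiB scratch space for tactics
    config.set_flag(trt.BuilderFlag.FP16)    # reduced precision for lower latency
    return builder.build_engine(network, config)

# Hypothetical usage; "model.onnx" is a placeholder path.
engine = build_engine("model.onnx")
```

The resulting engine is then deserialized at deployment time and fed with input buffers for low-latency inference; the compiler-style optimization happens once, ahead of serving.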


Deep Learning Software vs. Hardware: NVIDIA releases TensorRT 7 inference software, Intel acquires Habana Labs (ZDNet)

#artificialintelligence

At GTC China yesterday, NVIDIA made a series of announcements. Some had to do with local partners and related achievements, such as powering the likes of Alibaba and Baidu. Partners of this magnitude are bound to generate impressive numbers and turn some heads. Other announcements had to do with new hardware. NVIDIA unveiled Orin, a new system-on-a-chip (SoC) designed for autonomous vehicles and robots, as well as a new software-defined platform powered by the SoC, called NVIDIA DRIVE AGX Orin.


Nvidia targets neural networks in the datacentre with new benchmark

#artificialintelligence

Nvidia has announced a series of new benchmarks tracking the performance of tools for running AI inference both at the edge and in the datacentre. The results of MLPerf Inference 0.5, the industry's first independent suite of AI benchmarks for inference, help to demonstrate the performance of NVIDIA Turing GPUs for datacentres and the NVIDIA Xavier system-on-a-chip for edge computing. Nvidia posted the fastest results on the new benchmarks measuring the performance of AI inference workloads in datacentres and at the edge -- building on the company's position in recent benchmarks measuring AI training. 'AI is at a tipping point as it moves swiftly from research to large-scale deployment for real applications,' said Ian Buck, general manager and vice president of Accelerated Computing at NVIDIA. 'AI inference is a tremendous computational challenge.'
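For a sense of what such an inference benchmark actually measures, here is a simplified, unofficial sketch in the spirit of MLPerf's single-stream scenario, which reports 90th-percentile latency. It uses PyTorch and ResNet-50 (one of the MLPerf Inference 0.5 vision workloads) in place of the official LoadGen harness; the query counts are arbitrary choices for the example.

```python
import time
import numpy as np
import torch
import torchvision.models as models

# ResNet-50 stands in for an MLPerf Inference 0.5 vision workload; this
# harness is a simplified single-stream sketch, not the official LoadGen.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(pretrained=True).eval().to(device)
query = torch.randn(1, 3, 224, 224, device=device)

latencies = []
with torch.no_grad():
    for _ in range(10):                    # warm-up queries, not timed
        model(query)
    if device == "cuda":
        torch.cuda.synchronize()
    for _ in range(200):                   # timed single-stream queries
        start = time.perf_counter()
        model(query)
        if device == "cuda":
            torch.cuda.synchronize()       # wait for the GPU before stopping the clock
        latencies.append(time.perf_counter() - start)

# MLPerf's single-stream scenario scores on 90th-percentile latency.
print(f"p90 latency: {np.percentile(latencies, 90) * 1e3:.2f} ms")
```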