Nvidia breaks records in training and inference for real-time conversational AI – TechCrunch

#artificialintelligence

Nvidia's GPU-powered platform for developing and running conversational AI that understands and responds to natural language requests has achieved some key milestones and broken some records that have big implications for anyone building on its tech -- which includes companies large and small, as much of the code used to achieve these advances is open source, written in PyTorch and easy to run. The biggest achievements Nvidia announced today include breaking the hour mark in training BERT, one of the world's most advanced AI language models and a state-of-the-art model widely considered a good benchmark for natural language processing. Nvidia's AI platform trained the model in less than an hour -- a record-breaking 53 minutes -- and the trained model could then successfully run inference. Nvidia's breakthroughs aren't just cause for bragging rights: these advances scale and provide real-world benefits for anyone working with its NLP conversational AI and GPU hardware. Nvidia achieved its record-setting training time on one of its SuperPOD systems, which is made up of 92 Nvidia DGX-2H systems running 1,472 V100 GPUs, and handled inference on Nvidia T4 GPUs running Nvidia TensorRT -- which beat the performance of even highly optimized CPUs by many orders of magnitude.
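
For a sense of what "easy to run" looks like in practice, here is a minimal sketch of BERT inference in PyTorch. It assumes the Hugging Face transformers package and the public bert-base-uncased checkpoint rather than NVIDIA's own open-source training scripts, so treat it as an illustration, not NVIDIA's code:

    import torch
    from transformers import BertTokenizer, BertModel

    # Load a public BERT checkpoint (assumption: Hugging Face transformers is installed).
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased").eval()

    # Move the model to a GPU if one is available; NVIDIA's results used V100 and T4 GPUs.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)

    # Run a single inference pass without tracking gradients.
    inputs = tokenizer("Conversational AI understands natural language requests.",
                       return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden size)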


Watch NVIDIA's GeForce RTX launch right here at 12PM ET!

Engadget

By holding a rare solo press conference at Gamescom 2018, NVIDIA is offering a pretty good clue about what it will announce. Thanks to the inevitable leaks, we know it'll likely take the wraps off its latest consumer gaming graphics cards, including the flagship GeForce RTX 2080 Ti. All signs point to Turing-based GPUs with ray-tracing tech (hence RTX rather than GTX) that will make games more realistic -- much like we just saw with its professional Quadro cards. For the 2080 Ti, expect big performance bumps, thanks to the first-ever use of GDDR6 memory, along with a beastly 4,352 CUDA cores. You'll reportedly pay around $1,000 for the card, and more in power bills: the 2080 Ti is said to gulp 285 watts.


NVIDIA/nvidia-docker

#artificialintelligence

This repository includes utilities to build and run NVIDIA Docker images. The full documentation is available on the repository wiki. A good place to start is to understand why NVIDIA Docker is needed in the first place. A signed copy of the Contributor License Agreement needs to be provided to digits@nvidia.com before any change can be accepted.
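
To illustrate what GPU-enabled containers provide, here is a small sketch using the Docker SDK for Python to expose GPUs inside a container. The repository itself ships an nvidia-docker CLI wrapper and runtime, so the SDK-based approach and the CUDA image tag below are assumptions for illustration only:

    import docker

    client = docker.from_env()

    # Run nvidia-smi inside a CUDA base image with all GPUs exposed to the container.
    # The image tag is a placeholder; pick a CUDA image matching your driver.
    output = client.containers.run(
        "nvidia/cuda:12.2.0-base-ubuntu22.04",
        "nvidia-smi",
        device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
        remove=True,
    )
    print(output.decode())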


Nvidia's new TensorRT speeds machine learning predictions

#artificialintelligence

Nvidia has released a new version of TensorRT, a runtime system for serving inferences from deep learning models on Nvidia's own GPUs. Inferences, or predictions made from a trained model, can be served from either CPUs or GPUs. Serving inferences from GPUs is part of Nvidia's strategy to drive greater adoption of its processors, countering what AMD is doing to break Nvidia's stranglehold on the machine learning GPU market. Nvidia claims the GPU-based TensorRT is better across the board for inferencing than CPU-only approaches. In one of Nvidia's proffered benchmarks, the AlexNet image classification test under the Caffe framework, TensorRT was 42 times faster than a CPU-only version of the same test -- 16,041 images per second vs. 374 -- when run on Nvidia's Tesla P40 processor.
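
To make the inference-serving angle concrete, the following is a minimal sketch of building a TensorRT engine with the modern Python API. The TensorRT release covered in the article used a Caffe model importer, so the ONNX workflow, file names, and FP16 flag here are illustrative assumptions:

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

    # Parse a trained model (hypothetical ONNX export) into a TensorRT network.
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open("alexnet.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("Failed to parse the ONNX model")

    # Build a serialized engine; FP16 trades a little precision for higher
    # throughput on GPUs that support it.
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)
    serialized_engine = builder.build_serialized_network(network, config)
    with open("alexnet.engine", "wb") as f:
        f.write(serialized_engine)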


NVIDIA Targets Next AI Frontiers: Inference And China

#artificialintelligence

NVIDIA's meteoric growth in the datacenter, where its business is now generating some $1.6B annually, has been largely driven by the demand to train deep neural networks for Machine Learning (ML) and Artificial Intelligence (AI) -- an area where the computational requirements are simply mind-boggling. Much of this business is coming from the largest datacenters in the US, including Amazon, Google, Facebook, IBM, and Microsoft. Recently, NVIDIA announced new technology and customer initiatives at its annual Beijing GTC event to help drive revenue in the inference market for Machine Learning, as well as solidify the company's position in the huge Chinese AI market. For those unfamiliar, inference is where the trained neural network is used to predict and classify sample data. It is likely that the inference market will eventually be larger, in terms of chip unit volumes, than the training market; after all, once you train a neural network, you probably intend to use it, and use it a lot.
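
The training/inference distinction is easy to see in code. Below is a small, framework-level sketch in PyTorch (an assumption; the article is not tied to any particular framework) showing the same tiny network in both modes:

    import torch
    import torch.nn as nn

    # A tiny classifier standing in for a trained neural network.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Training: compute gradients and update weights (the compute-heavy phase).
    x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
    model.train()
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Inference: the trained model classifies new samples, no gradients needed.
    model.eval()
    with torch.no_grad():
        predictions = model(torch.randn(2, 16)).argmax(dim=1)
    print(predictions)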