NVIDIA Achieves Breakthroughs in Language Understanding to Enable Real-Time Conversational AI

#artificialintelligence

NVIDIA today announced breakthroughs in language understanding that allow businesses to engage more naturally with customers using real-time conversational AI. NVIDIA's AI platform is the first to train one of the most advanced AI language models -- BERT -- in less than an hour and complete AI inference in just over 2 milliseconds. This groundbreaking level of performance makes it possible for developers to use state-of-the-art language understanding for large-scale applications they can make available to hundreds of millions of consumers worldwide. Early adopters of NVIDIA's performance advances include Microsoft and some of the world's most innovative startups, which are harnessing NVIDIA's platform to develop highly intuitive, immediately responsive language-based services for their customers. Limited conversational AI services have existed for several years.


Talk to Me: Nvidia Claims NLP Inference, Training Records

#artificialintelligence

Nvidia says it's achieved significant advances in conversational natural language processing (NLP) training and inference, enabling more complex, immediate-response exchanges between customers and chatbots. And the company says it has a new language training model in the works that dwarfs existing ones. Nvidia said its AI platform trained the BERT-Large AI language model in less than an hour and performed AI inference in 2 milliseconds, making "it possible for developers to use state-of-the-art language understanding for large-scale applications…." Training: Running the largest version of the Bidirectional Encoder Representations from Transformers (BERT-Large) language model, an Nvidia DGX SuperPOD with 92 Nvidia DGX-2H systems running 1,472 V100 GPUs cut training from several days to 53 minutes. A single DGX-2 system trained BERT-Large in 2.8 days.
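Nvidia's 53-minute result came from scaling BERT-Large training across 1,472 GPUs; the recipes it actually used are published as open source. Purely as a hedged sketch of the general technique (data-parallel training with gradients synchronized across GPUs), the following minimal PyTorch DistributedDataParallel loop illustrates the idea. The Hugging Face "bert-large-uncased" checkpoint, the toy batch, and the hyperparameters are assumptions for illustration, not Nvidia's configuration.

```python
# Minimal sketch of multi-GPU BERT training with PyTorch DistributedDataParallel.
# Assumptions: the Hugging Face "bert-large-uncased" checkpoint stands in for
# BERT-Large, and a toy batch stands in for real pre-training data.
# Launch with: torchrun --nproc_per_node=<num_gpus> train_bert_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from transformers import BertForMaskedLM, BertTokenizerFast

def main():
    # One process per GPU; torchrun sets LOCAL_RANK for each process.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    tokenizer = BertTokenizerFast.from_pretrained("bert-large-uncased")
    model = BertForMaskedLM.from_pretrained("bert-large-uncased").cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # gradients are all-reduced across GPUs

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # Toy batch; a real run would stream sharded training data per rank.
    batch = tokenizer(["Conversational AI needs fast language models."] * 8,
                      return_tensors="pt", padding=True).to(local_rank)
    labels = batch["input_ids"].clone()

    model.train()
    for step in range(10):
        optimizer.zero_grad()
        loss = model(**batch, labels=labels).loss
        loss.backward()          # DDP overlaps gradient all-reduce with backward
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Each GPU runs its own copy of the model on its own slice of data, and gradients are averaged across processes during the backward pass; scaling from a few GPUs to a SuperPOD is, conceptually, the same loop run across far more processes.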


NVIDIA's AI advance: Natural language processing gets faster and better all the time – ZDNet

#artificialintelligence

When NVIDIA announced breakthroughs in language understanding to enable real-time conversational AI, we were caught off guard. We were still trying to digest the proceedings of ACL, one of the biggest research events for computational linguistics worldwide, at which Facebook, Salesforce, Microsoft and Amazon were all present. While NVIDIA's announcement and the ACL research represent two different sets of achievements, they are closely connected. Here is what NVIDIA's breakthrough is about, and what it means for the world at large. As ZDNet reported yesterday, NVIDIA says its AI platform now holds the fastest training record, the fastest inference, and the largest training model of its kind to date.


NVIDIA AI Platform Takes Conversational User Experience To A New Level

#artificialintelligence

After breaking all the records related to training computer vision models, NVIDIA now claims that its AI platform can train a natural language neural network model on one of the largest datasets in record time. It also claims an inference time of just 2 milliseconds, which translates to an extremely fast response from a model taking part in a conversation with a user. After computer vision, natural language processing is one of the top applications of AI. From Siri to Alexa to Cortana to Google Assistant, all conversational user experiences are powered by AI. Advances in AI research are putting the power of language understanding and conversational interfaces into the hands of developers.
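The 2-millisecond figure was measured on NVIDIA's own optimized inference stack (TensorRT on T4 GPUs). As a rough, hedged sketch of how per-query latency is measured at all, the snippet below times a single forward pass of a smaller stand-in model ("bert-base-uncased") in plain PyTorch; it will not reproduce NVIDIA's number, it only illustrates the measurement.

```python
# Sketch of timing one BERT inference pass on a GPU with CUDA events.
# Assumptions: "bert-base-uncased" is a stand-in model; NVIDIA's 2 ms result
# was measured on its own TensorRT/T4 stack, not this plain PyTorch path.
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").cuda().eval().half()

inputs = tokenizer("What time does the store open?", return_tensors="pt").to("cuda")

# Warm up so CUDA kernels and caches are initialized before timing.
with torch.no_grad():
    for _ in range(10):
        model(**inputs)

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
with torch.no_grad():
    start.record()
    model(**inputs)
    end.record()
torch.cuda.synchronize()  # wait for the GPU before reading the timer
print(f"single-query latency: {start.elapsed_time(end):.2f} ms")
```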


Nvidia breaks records in training and inference for real-time conversational AI – TechCrunch

#artificialintelligence

Nvidia's GPU-powered platform for developing and running conversational AI that understands and responds to natural language requests has achieved some key milestones and broken some records that have big implications for anyone building on their tech -- which includes companies large and small, as much of the code they've used to achieve these advancements is open source, written in PyTorch and easy to run. The biggest achievements Nvidia announced today include breaking the hour mark in training BERT, one of the world's most advanced AI language models and a state-of-the-art model widely considered a good standard for natural language processing. Nvidia's AI platform was able to train the model in less than an hour, a record-breaking achievement at just 53 minutes, and the trained model could then successfully run inference (i.e., apply its learned language understanding to new queries) in just over 2 milliseconds. Nvidia's breakthroughs aren't just cause for bragging rights -- these advances scale and provide real-world benefits for anyone working with their NLP conversational AI and GPU hardware. Nvidia achieved its record-setting times for training on one of its SuperPOD systems, which is made up of 92 Nvidia DGX-2H systems running 1,472 V100 GPUs, and managed the inference on Nvidia T4 GPUs running Nvidia TensorRT -- which beat the performance of even highly optimized CPUs by many orders of magnitude.
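Nvidia's production path runs BERT through TensorRT on T4 GPUs, and its optimized code is available in its open-source repositories. As a hedged sketch of the usual hand-off from PyTorch to TensorRT, a model can first be exported to ONNX; the checkpoint, sequence length and file name below are illustrative assumptions, not Nvidia's published configuration.

```python
# Sketch: export a BERT model from PyTorch to ONNX as a typical hand-off point
# for TensorRT optimization. The checkpoint, sequence length, and file name are
# illustrative assumptions.
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# torchscript=True makes the model return plain tuples, which traces cleanly.
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True).eval()

# Dummy input that fixes the tensor layout for the exported graph.
dummy = tokenizer("hello world", return_tensors="pt",
                  padding="max_length", max_length=128)

torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"], dummy["token_type_ids"]),
    "bert.onnx",
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={"input_ids": {0: "batch"},
                  "attention_mask": {0: "batch"},
                  "token_type_ids": {0: "batch"}},
    opset_version=14,
)
# The resulting bert.onnx can then be compiled into an optimized engine
# (for example with TensorRT's trtexec tool), which is where reduced-precision
# and kernel-fusion optimizations are applied for low-latency serving.
```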