If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Oded Green is a Senior Graph Software Engineer at NVIDIA. The NVIDIA DGX-1 supercomputer is a GPU-based platform designed to make big data processing, machine learning, and deep learning workloads faster and more efficient. "There aren't many computers or servers significant enough to be recognizable by model name. But a DGX-1 is well-known throughout the computing community, particularly by the artificial intelligence and machine learning crowd," said School of Computational Science and Engineering (CSE) Research Technologist Will Powell. The DGX-1 is powered by 8 NVIDIA Tesla V100 GPUs and delivers over 40,000 CUDA cores, 5,000 Tensor Cores, and 1,000 TFLOPS, built specifically for deep learning.
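Those aggregate figures line up with the per-GPU specs of the Tesla V100. A quick arithmetic sanity check, as a sketch — the per-GPU numbers below come from NVIDIA's published V100 specifications, not from this article:

```python
# Sanity check: aggregate DGX-1 figures from per-GPU Tesla V100 specs.
# Per-GPU values are NVIDIA's published V100 datasheet numbers
# (an assumption of this sketch, not quoted in the article).
GPUS = 8
CUDA_CORES_PER_GPU = 5_120    # V100 CUDA cores
TENSOR_CORES_PER_GPU = 640    # V100 Tensor Cores
TFLOPS_PER_GPU = 125          # V100 deep-learning (tensor) TFLOPS

print(GPUS * CUDA_CORES_PER_GPU)    # 40960 -> "over 40,000 CUDA cores"
print(GPUS * TENSOR_CORES_PER_GPU)  # 5120  -> "5,000 Tensor Cores"
print(GPUS * TFLOPS_PER_GPU)        # 1000  -> "1,000 TFLOPS"
```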
When we last covered Nvidia (NVDA), we were taken aback by the quarterly results. From our perspective, we were disappointed that gross margins held up as well as they did and that NVDA guided higher in this area for the next quarter; this was definitely not the outcome we were looking for. Revenues also held up better than expected considering some rather brutal USD strength. Both of these events tempered our shorter-term bearishness.
For a quality conversation between a human and a machine, responses have to be quick, intelligent and natural-sounding. But up to now, developers of language-processing neural networks that power real-time speech applications have faced an unfortunate trade-off: Be quick and you sacrifice the quality of the response; craft an intelligent response and you're too slow. That's because human conversation is incredibly complex. Every statement builds on shared context and previous interactions. From inside jokes to cultural references and wordplay, humans speak in highly nuanced ways without skipping a beat.
There is no better way to learn coding and AI than getting some hands-on practice. You can teach the robot to follow objects, avoid collisions, and a whole lot more with the simple tutorials available. It is compatible with the TensorFlow, PyTorch, Caffe, and MXNet frameworks. The kit includes a Leopard Imaging 145FOV wide-angle camera, an EDIMAX WiFi adapter, a SparkFun Micro OLED Breakout, and all the parts you need to get started.
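The collision-avoidance tutorial boils down to a simple control loop: a model scores each camera frame as "blocked" or "free," and the robot turns or drives forward accordingly. A minimal sketch of that loop in plain Python — `decide`, `run_step`, and the stub model are hypothetical stand-ins for illustration, not the actual JetBot API:

```python
# Hedged sketch of a JetBot-style collision-avoidance loop.
# In a real tutorial, `model` would be a trained classifier (e.g. in
# PyTorch) and the command would drive actual motors; both are kept
# abstract here on purpose.

def decide(blocked_prob: float, threshold: float = 0.5) -> str:
    """Map the model's 'blocked' probability to a motor command."""
    return "turn_left" if blocked_prob >= threshold else "forward"

def run_step(frame, model) -> str:
    # model(frame) is assumed to return P(path is blocked) in [0, 1]
    return decide(model(frame))

# Usage with a stub model that flags "dark" frames as blocked:
stub_model = lambda frame: 0.9 if frame == "dark" else 0.1
print(run_step("dark", stub_model))    # turn_left
print(run_step("bright", stub_model))  # forward
```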
After breaking records in training computer vision models, NVIDIA now claims that its AI platform can train a natural language neural network model on one of the largest datasets in record time. It also claims an inference time of just 2 milliseconds, which translates to an extremely fast response from a model participating in a conversation with a user. After computer vision, natural language processing is one of the top applications of AI: from Siri to Alexa to Cortana to Google Assistant, all conversational user experiences are powered by AI. These advancements in AI research are putting the power of language understanding and conversational interfaces into the hands of developers.
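Language models like the ones described here are built from transformer layers, whose core operation is scaled dot-product attention, softmax(QKᵀ/√d)V. A minimal NumPy sketch of that single operation with toy sizes — illustrative only, not NVIDIA's implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- the core op inside transformer layers."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one mixed value vector per query
```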
Nvidia CEO Jensen Huang said AI would drive long-term demand because it is the "single most powerful force of our time." Nvidia reported earnings and revenues that beat analysts' expectations as demand for graphics and artificial intelligence chips picked up in the second fiscal quarter. Huang also said his company's near-term growth will come from gaming and a couple of variants of the company's artificial intelligence chip business: inferencing and AI at the edge. During a conference call with analysts, Huang said artificial intelligence is the "single most powerful force of our time" and that there are more than 4,000 AI startups working with the company -- as compared to 2,000 AI startups in April 2017. In an interview with VentureBeat, Huang said the actual number of AI startups Nvidia is tracking is closer to 4,500.
Nvidia says it's achieved significant advances in conversational natural language processing (NLP) training and inference, enabling more complex, immediate-response interchanges between customers and chatbots. And the company says it has a new language training model in the works that dwarfs existing ones. Nvidia said its DGX-2 AI platform trained the BERT-Large AI language model in less than an hour and performed AI inference in 2 milliseconds, making "it possible for developers to use state-of-the-art language understanding for large-scale applications…." Training: running the largest version of the Bidirectional Encoder Representations from Transformers (BERT-Large) language model, an Nvidia DGX SuperPOD with 92 Nvidia DGX-2H systems running 1,472 V100 GPUs cut training from several days to 53 minutes. A single DGX-2 system trained BERT-Large in 2.8 days.
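The quoted figures imply the scaling behavior: 2.8 days on one system versus 53 minutes on 92 systems is roughly a 76× speedup from 92× the hardware. A quick check of the arithmetic (the 16-GPUs-per-DGX-2H figure follows from 1,472 ÷ 92):

```python
# Sanity check on the BERT-Large training figures quoted above.
single_system_minutes = 2.8 * 24 * 60  # 2.8 days on one DGX-2
superpod_minutes = 53                  # 53 minutes on the DGX SuperPOD
systems = 92
gpus = systems * 16                    # 16 V100s per DGX-2H system

speedup = single_system_minutes / superpod_minutes
print(gpus)                         # 1472, matching the quoted GPU count
print(round(speedup))               # ~76x speedup from 92x the systems
print(round(speedup / systems, 2))  # ~0.83 scaling efficiency
```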
The GPU maker says its AI platform now has the fastest training record, the fastest inference, and the largest training model of its kind to date. Nvidia is touting advancements to its artificial intelligence (AI) technology for language understanding that it says set new performance records for conversational AI. By adding key optimizations to its AI platform and GPUs, Nvidia is aiming to become the premier provider of conversational AI services, which it says have been limited up to this point by a broad inability to deploy large AI models in real time. Unlike the much simpler transactional AI, conversational AI uses context and nuance, and its responses are instantaneous, explained Nvidia's vice president of applied deep learning research, Bryan Catanzaro, during a press briefing.