Nvidia launches Indian virtual incubator for AI


India: American technology major Nvidia has launched the Nvidia Inception programme in India, in recognition of the country's budding innovation ecosystem in Artificial Intelligence (AI). Inception is a virtual incubator programme to support startups with revolutionary ideas in AI. Members will receive a custom set of benefits, from hardware grants and marketing support to training with deep learning experts. The Inception Programme was launched in India at the inaugural Nvidia Emerging Companies Summit India, part of the GPU Technology Conference (GTCx), a platform for the brightest minds and greatest ideas in GPU computing. The momentum around AI among Indian innovators is so significant that, at launch, the Inception Programme already has close to 100 Indian startups as members.

StradVision Joins NVIDIA Inception Program as Premier Partner


StradVision has joined NVIDIA Inception, a virtual accelerator program designed to nurture companies that are revolutionizing industries with advancements in AI and data science. Distinguishing itself as a collaborator of choice among AI companies, StradVision has also been selected as one of the program's Premier Partners, an exclusive group within NVIDIA Inception's global network of over 6,000 startups. StradVision specializes in AI-based vision processing technology for Advanced Driver-Assistance Systems (ADAS) and Autonomous Vehicles (AVs) through SVNet, its flagship product. SVNet is lightweight embedded software that enables vehicles to accurately detect and identify objects on the road, even in harsh weather or poor lighting. Thanks to StradVision's patented Deep Neural Network-enabled technology, SVNet can be optimized for any hardware system.

Nvidia reaches out to VCs to fund and build the AI ecosystem


Nvidia has come across more than 1,300 AI startups in its Inception program, but the graphics and AI chip maker believes that there aren't enough artificial intelligence startups out there. Jeff Herbst, vice president of business development, said that Nvidia wants to partner with venture capitalists to fund even more AI startups that will build out the necessary parts of the ecosystem for AI, which is expected to become a huge part of the technology landscape going forward. He said the doors are open to partnering because there are so many more startups than Nvidia can help or finance on its own. Nvidia held a lunch on Thursday at its GPU Technology Conference to enlist dozens of venture capitalists in its crusade to change the world through artificial intelligence. Five startups made presentations to the VCs during the lunch, and Herbst spoke with Jim McHugh, general manager of Nvidia's deep learning group, about the trends that will create opportunities for AI startups in the future.

Nvidia Inception highlights 4 AI startups for enterprise applications


Nvidia is tracking more than 2,800 companies through its Inception program, which was created to identify the best artificial intelligence startups. It does so to find investment opportunities, but the world's biggest standalone maker of graphics processing units (GPUs) also knows that these startups will use its technology for AI computing. I headed out to Nvidia's new headquarters in Santa Clara, California, last week to watch pitches from 12 companies in a Shark Tank-style judging event. The 12 presenters were chosen from 200 applicants and are vying for a $1 million prize pool. Nvidia CEO Jensen Huang introduced a panel of four judges and said he was glad that he didn't have to go through this kind of process when he cofounded Nvidia in 1993.

Talk to Me: Nvidia Claims NLP Inference, Training Records


Nvidia says it has achieved significant advances in conversational natural language processing (NLP) training and inference, enabling more complex, immediate-response interchanges between customers and chatbots. The company also says it has a new language training model in the works that dwarfs existing ones. Nvidia said its DGX-2 AI platform trained the BERT-Large AI language model in less than an hour and performed AI inference in 2 milliseconds, making "it possible for developers to use state-of-the-art language understanding for large-scale applications…." Training: Running the largest version of the Bidirectional Encoder Representations from Transformers (BERT-Large) language model, an Nvidia DGX SuperPOD with 92 Nvidia DGX-2H systems running 1,472 V100 GPUs cut training time from several days to 53 minutes. A single DGX-2 system trained BERT-Large in 2.8 days.
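As a rough sanity check on the figures quoted above, a back-of-the-envelope calculation (assuming 16 V100 GPUs per DGX-2H system, which matches the article's 1,472-GPU count) suggests the 92-system SuperPOD delivers roughly a 76x speedup over a single DGX-2, or about 83% scaling efficiency against the 92x increase in hardware:

```python
# Back-of-the-envelope check of the BERT-Large training figures above.
# Assumption (not stated in the article): a DGX-2/DGX-2H holds 16 V100 GPUs.

SINGLE_DGX2_DAYS = 2.8    # one DGX-2 system, per the article
SUPERPOD_MINUTES = 53     # 92-system DGX SuperPOD, per the article
GPUS_PER_SYSTEM = 16
SUPERPOD_SYSTEMS = 92

single_minutes = SINGLE_DGX2_DAYS * 24 * 60          # 4032 minutes
speedup = single_minutes / SUPERPOD_MINUTES          # ~76x faster
scaling_efficiency = speedup / SUPERPOD_SYSTEMS      # ~83% of linear scaling

print(f"Total GPUs in SuperPOD: {SUPERPOD_SYSTEMS * GPUS_PER_SYSTEM}")
print(f"Speedup over one DGX-2: {speedup:.1f}x")
print(f"Scaling efficiency vs. 92x hardware: {scaling_efficiency:.0%}")
```

The GPU count (92 × 16 = 1,472) agrees with the article, which lends some confidence to the per-system assumption.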