
Nvidia creates the world's first video game demo using AI generated graphics – Fanatical Futurist by International Keynote Speaker Matthew Griffin


The recent boom in Artificial Intelligence (AI) has led to the emergence of an entirely new field of research dedicated to using AIs, or Creative Machines as they're also known, to create synthetic content – in layman's terms, "fake" digital content produced without human involvement, covering everything from audio tracks and imagery to videos. As part of this trend I recently reported how Promethean AI was using its AI to help people create game environments just by describing what they wanted, and how Nvidia had created an AI that could take a real video feed, from a city for example, and transform it in real time into digital content that could be used to create game environments as well as VR worlds. Both were huge breakthroughs in the field at the time. Now the same team behind the original Nvidia breakthrough has published research showing how AI-generated video and visuals can be combined with a traditional video game engine "to create a hybrid graphics system" that could one day be used in video games, movies, and virtual reality.

NVIDIA's new AI turns videos of the real world into virtual landscapes


Attendees of this year's NeurIPS AI conference in Montreal can spend a few moments driving through a virtual city, courtesy of NVIDIA. While that normally wouldn't be much to get worked up over, the simulation is fascinating because of what made it possible. With the help of some clever machine learning techniques and a handy supercomputer, NVIDIA has cooked up a way for AI to chew on existing videos and use the objects and scenery found within them to build interactive environments. NVIDIA's research here isn't just a significant technical achievement; it also stands to make it easier for artists and developers to craft lifelike virtual worlds. Instead of having to meticulously design objects and people to fill a space polygon by polygon, they can use existing machine learning tools to roughly define those entities and let NVIDIA's neural network fill in the rest.

CES 2019: Nvidia CEO Huang explains how AI changes everything


Nvidia's chief executive, Jensen Huang, took to the stage of the ballroom at the MGM Grand hotel in Las Vegas on Sunday night, the opening night of the Consumer Electronics Show, to tell those assembled that AI, especially deep learning, is fundamentally changing his company's business of creating lifelike computer graphics. The traditional graphics pipeline is yielding to neural network approaches, accelerated by newer on-chip circuitry, so that physics simulation and sampling of real-world details are taking over from the traditional practice of painting polygons on the screen to simulate objects and their environment. Huang pointed to how primitive a lot of graphics still looks, saying that "in the last 15 years, technology has evolved tremendously, but it still looks largely like a cartoon." At the core of computer graphics today is the process of rasterization, whereby objects are rendered as collections of triangles. It's a struggle to convincingly employ rasters for complex nuances of light and shadow, Huang noted.
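The rasterization step Huang describes can be sketched in a few lines: each triangle is tested against pixel centers using edge functions (signed areas), and the pixels it covers are "painted." This is a minimal illustrative sketch, not GPU code; real hardware does this in parallel with sub-pixel precision and depth testing.

```python
# Minimal rasterization sketch: convert one triangle into the set of pixels
# it covers, using edge functions (signed areas of directed edges).

def edge(ax, ay, bx, by, px, py):
    # Positive if point (px, py) lies to the left of the directed edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of (x, y) pixels whose centers fall inside the triangle."""
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside if all three edge functions agree in sign
            # (handles either winding order).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.add((x, y))
    return covered

pixels = rasterize_triangle((0, 0), (8, 0), (0, 8), 8, 8)
```

The "cartoon" look Huang criticizes comes from layering approximations on top of this step; light and shadow have to be faked per triangle rather than simulated.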

AI will create a life-like 'false reality'

Daily Mail - Science & tech

Using artificial intelligence, experts have created a 'false reality' so similar to real life that you may not be able to tell it is a simulation.

Nvidia unveiled a new AI engine that renders virtual worlds in real time – Fanatical Futurist by International Keynote Speaker Matthew Griffin


Nvidia has announced a new Artificial Intelligence (AI) deep learning model that "aims to catapult the graphics industry into the AI Age," and the result is the first ever interactive AI-rendered virtual world. In short, Nvidia now has an AI capable of rendering high definition virtual environments, which can be used to create Virtual Reality (VR) games and simulations, in real time – and that's big because it takes the effort and cost out of having to design and build them from scratch. To work their magic the researchers used what they called a Conditional Generative Neural Network as a starting point and trained it to render new 3D environments. The breakthrough will allow developers and artists of all kinds to create interactive 3D virtual worlds based on videos from the real world, dramatically lowering the cost and time it takes to create them. "NVIDIA has been creating new ways to generate interactive graphics for 25 years – and this is the first time we can do this with a neural network," said Bryan Catanzaro, Vice President of Applied Deep Learning at Nvidia, who led the research team. "Neural networks – specifically, generative models like these – are going to change the way graphics are created."
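The key idea behind this kind of conditional rendering is the interface: the game engine supplies a per-frame semantic label map (which pixel is road, building, sky, and so on), and a learned generator fills in photorealistic appearance. The sketch below is a deliberately toy stand-in for that interface – the palette, labels, and jitter are all invented for illustration, and the real system replaces the lookup with a deep conditional generative network trained on real video.

```python
# Toy sketch of the conditioning interface: semantic label map in,
# rendered RGB frame out. The "generator" here is a hypothetical
# palette lookup plus noise, standing in for a trained neural network.
import random

CLASS_BASE_COLOR = {          # assumed toy palette, not Nvidia's classes
    "road": (90, 90, 90),
    "building": (150, 120, 100),
    "sky": (120, 170, 230),
}

def toy_generator(label_map, seed=0):
    """Map a 2D grid of semantic labels to an RGB image (nested lists)."""
    rng = random.Random(seed)
    image = []
    for row in label_map:
        out_row = []
        for label in row:
            r, g, b = CLASS_BASE_COLOR[label]
            jitter = rng.randint(-10, 10)  # stand-in for learned texture detail
            out_row.append((r + jitter, g + jitter, b + jitter))
        image.append(out_row)
    return image

labels = [["sky", "sky"], ["building", "road"]]
frame = toy_generator(labels)
```

The point of the design is the division of labor: the engine stays responsible for layout and interactivity, while appearance is learned from real footage rather than modeled polygon by polygon.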