NVIDIA AI Lets You See What Your Pet Would Look Like If It Were A Meerkat

#artificialintelligence

One of NVIDIA's many artificial intelligence projects (and by far the best one to date) lets you envision what your pet might look like if it were a meerkat. In case you didn't know, NVIDIA has its own research group dedicated solely to AI, which develops new AI systems and agents that can do some pretty neat things. As the researchers say, although they take AI research very seriously, there's still no excuse not to have some fun with the products of their labors. GANimal is the name given to an AI system they developed around a year ago that can generate a selection of images translating your own pet's face into what said pet might look like if it were another type of animal. "With GANimal, you can bring your pet's alter ego to life by projecting their expression and pose onto other animals," the developers explain.


For Pac-Man's 40th birthday, Nvidia uses AI to make new levels

PCWorld

Pac-Man turns 40 today, and even though the days of quarter-munching arcade machines in hazy bars are long behind us, the legendary game's still helping to push the industry forward. On Friday, Nvidia announced that its researchers have trained an AI to create working Pac-Man games without teaching it about the game's rules or giving it access to an underlying game engine. Nvidia's "GameGAN" simply watched 50,000 Pac-Man games to learn the ropes. That's an impressive feat in its own right, but Nvidia hopes the "generative adversarial network" (GAN) technology underpinning the project can be used in the future to help developers create games faster and train autonomous robots. "This is the first research to emulate a game engine using GAN-based neural networks," Nvidia researcher Seung-Wook Kim said in a press release.


NVIDIA's AI built Pac-Man from scratch in four days

Engadget

When Pac-Man hit arcades on May 22nd, 1980, it held the record for time spent in development, having taken a whopping 17 months to design, code and complete. Now, 40 years later to the day, NVIDIA needed just four days to train its new GameGAN AI to wholly recreate it based only on watching another AI play through. GameGAN is a generative adversarial network (hence, GAN) similar to those used to generate (and detect) photorealistic images of people who do not exist. The generator is trained on a large sample dataset and then instructed to generate an image based on what it saw. The discriminator then compares the generated image to the sample dataset to determine how closely the two resemble one another.
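For readers curious how that generator/discriminator interplay looks in practice, here is a minimal, purely illustrative sketch in PyTorch. It is not NVIDIA's GameGAN code; the network sizes, batch size, and stand-in data are all assumptions made for demonstration.

# Illustrative only: a minimal generator/discriminator training step in PyTorch,
# not NVIDIA's GameGAN implementation. Shapes and sizes are placeholder assumptions.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # hypothetical noise and image sizes

# Generator: maps random noise to a fake "image" vector.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
# Discriminator: scores how real an image looks (outputs a logit).
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(32, img_dim)  # stand-in for a batch of real training samples

# Discriminator step: push scores for real samples toward 1, fakes toward 0.
fake = G(torch.randn(32, latent_dim)).detach()
loss_d = bce(D(real_batch), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to make the discriminator score fakes as real.
fake = G(torch.randn(32, latent_dim))
loss_g = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Repeating these two steps over a real dataset is the basic adversarial loop the excerpt describes: the generator improves until the discriminator can no longer easily tell its output from the training samples.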


Nvidia's bleeding-edge Ampere GPU architecture revealed: 5 things PC gamers need to know

PCWorld

Nearly a year and a half after the GeForce RTX 20-series launched with Nvidia's Turing architecture inside, and three years after the launch of the data center-focused Volta GPUs, CEO Jensen Huang unveiled graphics cards powered by the new Ampere architecture during a digital GTC 2020 keynote on Thursday morning. It looks like an absolute monster. Ampere debuts in the form of the A100, a humongous data center GPU powering Nvidia's new DGX-A100 systems. Make no mistake: This 6,912 CUDA core-packing beast targets data scientists, with internal hardware optimized around deep learning tasks. You won't be using it to play Cyberpunk 2077.


Top 25 AI chip companies: A macro step change inferred from the micro scale

#artificialintelligence

One of the effects of the ongoing trade war between the US and China is likely to be the accelerated development of what are being called "artificial intelligence chips", or AI chips for short, also sometimes referred to as AI accelerators. AI chips could play a critical role in economic growth going forward because they will inevitably feature in cars, which are becoming increasingly autonomous; smart homes, where electronic devices are becoming more intelligent; robotics, obviously; and many other technologies. AI chips, as the term suggests, refer to a new generation of microprocessors specifically designed to process artificial intelligence tasks faster while using less power. Obvious, you might think, but some might wonder what the difference between an AI chip and a regular chip would be when all chips of any type process zeros and ones – a typical processor, after all, is actually capable of AI tasks. Graphics processing units are particularly good at AI-like tasks, which is why they form the basis for many of the AI chips being developed and offered today. Without getting out of our depth, while a general microprocessor is an all-purpose system, AI processors are built with logic gates and highly parallel calculation systems that are better suited to typical AI tasks such as image processing, machine vision, machine learning, deep learning, artificial neural networks, and so on. Perhaps cars make a useful metaphor: a general microprocessor is your typical family car that might have good speed and steering capabilities.


AWS Announces NVIDIA GPU instances for its G4 Elastic Compute Cloud

#artificialintelligence

The News: Friday 20th September saw news out of Amazon Web Services (AWS) about a renewed and expanded partnership with industry-leading Graphics Processing Unit (GPU) manufacturer NVIDIA to offer improved GPU-based cloud instances. The adoption of GPU technology has expanded from using these specialist processors solely for graphics acceleration to other commercial uses, from blockchain mining to inference engines in machine learning applications. Analyst Take: This is an important announcement, as best-in-class cloud services meet best-in-class machine learning capabilities, bringing AI as a Service into a consumption model that continues to grow in favor. Let's unpack the news piece by piece. GPU-powered Elastic Compute Cloud (EC2) instances.


Nvidia unveiled a new AI engine that renders virtual worlds in real time

#artificialintelligence

Nvidia has announced a new Artificial Intelligence (AI) deep learning model that "aims to catapult the graphics industry into the AI age," and the result is the first ever interactive AI-rendered virtual world. In short, Nvidia now has an AI capable of rendering high-definition virtual environments in real time, which can be used to create Virtual Reality (VR) games and simulations. That's big because it takes the effort and cost out of designing and building them from scratch, which has all sorts of advantages. To work their magic, the researchers used what they call a conditional generative neural network as a starting point and trained it to render new 3D environments. The breakthrough will allow developers and artists of all kinds to create interactive 3D virtual worlds based on videos from the real world, dramatically lowering the cost and time it takes to create them. "NVIDIA has been creating new ways to generate interactive graphics for 25 years – and this is the first time we can do this with a neural network," said Bryan Catanzaro, Vice President of Applied Deep Learning at Nvidia, who led the research team. "Neural networks – specifically, generative models like these – are going to change the way graphics are created."


Nvidia AI turns sketches into photorealistic landscapes in seconds

#artificialintelligence

Today at Nvidia GTC 2019, the company unveiled a stunning image creator. Using generative adversarial networks, users of the software can, with just a few clicks, sketch images that are nearly photorealistic. The software will instantly turn a couple of lines into a gorgeous mountaintop sunset. This is MS Paint for the AI age. Called GauGAN, the software is just a demonstration of what's possible with Nvidia's neural network platforms.
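As a rough illustration of the idea behind such sketch-to-image tools, the toy snippet below conditions a small convolutional generator on a semantic label map (the user's "sketch"). It does not reproduce GauGAN or the SPADE research it is built on; every class name and layer size is an assumption for demonstration only.

# Illustrative only: conditioning an image generator on a semantic label map,
# the general idea behind sketch-to-photo demos. Not NVIDIA's GauGAN/SPADE code.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 3                                          # e.g. 0 = sky, 1 = mountain, 2 = water
label_map = torch.randint(0, num_classes, (1, 64, 64))   # stand-in for a user's sketch

# One-hot encode the label map so each semantic class becomes its own input channel.
one_hot = F.one_hot(label_map, num_classes).permute(0, 3, 1, 2).float()

# A toy convolutional generator: label channels in, RGB image out.
generator = nn.Sequential(
    nn.Conv2d(num_classes, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)

image = generator(one_hot)   # (1, 3, 64, 64) synthetic "photo" for the sketched regions
print(image.shape)

In a real system the generator would be trained adversarially against a discriminator on photos paired with their segmentation maps, so that each labeled region is filled with plausible texture rather than noise.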


Nvidia has created the first video game demo using AI-generated graphics

#artificialintelligence

The recent boom in artificial intelligence has produced impressive results in a somewhat surprising realm: the world of image and video generation. The latest example comes from chip designer Nvidia, which today published research showing how AI-generated visuals can be combined with a traditional video game engine. The result is a hybrid graphics system that could one day be used in video games, movies, and virtual reality. "It's a new way to render video content using deep learning," Nvidia's vice president of applied deep learning, Bryan Catanzaro, told The Verge. "Obviously Nvidia cares a lot about generating graphics [and] we're thinking about how AI is going to revolutionize the field."


CES 2019: Nvidia CEO Huang explains how AI changes everything

ZDNet

Nvidia's chief executive, Jensen Huang, took to the stage of the ballroom at the MGM Grand hotel in Las Vegas on Sunday night, the opening night of the Consumer Electronics Show, to tell those assembled that AI, especially deep learning, is fundamentally changing his company's business of creating lifelike computer graphics. The traditional graphics pipeline is yielding to neural network approaches, accelerated by newer on-chip circuitry, so that physics simulation and sampling of real-world details are taking over from the traditional practice of painting polygons on the screen to simulate objects and their environment. Huang pointed to how primitive a lot of graphics still looks, saying that "in the last 15 years, technology has evolved tremendously, but it still looks largely like a cartoon." At the core of computer graphics today is the process of rasterization, whereby objects are rendered as collections of triangles. It's a struggle to convincingly employ rasters for complex nuances of light and shadow, Huang noted.
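To make the rasterization idea concrete, here is a toy, purely illustrative software rasterizer that fills a single triangle by testing pixel centers against edge functions. Real GPUs perform this step massively in parallel in dedicated hardware; nothing below is Nvidia code, and the triangle coordinates are arbitrary.

# Illustrative only: a toy rasterizer showing the core idea of rendering an
# object as triangles, each filled pixel by pixel.
def edge(ax, ay, bx, by, px, py):
    """Signed-area test: which side of the edge (a -> b) the point p lies on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width=20, height=10):
    """Return an ASCII grid with '#' where a pixel center falls inside the triangle."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5          # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # The pixel is inside if all three edge tests agree in sign.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                grid[y][x] = "#"
    return "\n".join("".join(row) for row in grid)

print(rasterize_triangle((2, 1), (18, 4), (6, 9)))

The struggle Huang describes comes from the fact that this per-triangle, per-pixel filling says nothing about how light bounces between surfaces, which is why effects like soft shadows and reflections require ray tracing or clever approximations layered on top.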