A new piece of software developed by American tech company NVIDIA uses deep learning to elevate even the roughest sketches into works of art. The program, dubbed GauGAN after the famous French post-impressionist Paul Gauguin, uses generative adversarial networks (GANs) to interpret simple lines and convert them into hyper-realistic images. According to the company, it could help professionals across a range of disciplines, such as architecture and urban planning, render images and visualisations faster and with greater accuracy. Simple shapes become mountains and lakes with just a stroke of what NVIDIA calls a 'smart paintbrush'. Artificial intelligence systems rely on neural networks, which learn by loosely simulating the way the brain works.
Today at Nvidia GTC 2019, the company unveiled a stunning image creator. Using generative adversarial networks, users of the software can, with just a few clicks, sketch images that are nearly photorealistic. The software will instantly turn a couple of lines into a gorgeous mountaintop sunset. This is MS Paint for the AI age. Called GauGAN, the software is just a demonstration of what's possible with Nvidia's neural network platforms.
Researchers at Nvidia have created a new generative adversarial network model for producing realistic landscape images from a rough sketch or segmentation map, and while it's not perfect, it is certainly a step towards allowing people to create their own synthetic scenery. The GauGAN model is initially being touted as a tool to help urban planners, game designers, and architects quickly create synthetic images. The model was trained on over a million images, including 41,000 from Flickr, with researchers stating it acts as a "smart paintbrush" as it fills in the details on the sketch. "It's like a colouring book picture that describes where a tree is, where the sun is, where the sky is," Nvidia vice president of applied deep learning research Bryan Catanzaro said. "And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows and colours, based on what it has learned about real images."
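The "colouring book" analogy above can be sketched in a few lines of code. This is only a toy illustration, not Nvidia's actual model: GauGAN-style generators are conditioned on a semantic segmentation map, an integer class label per pixel ("sky", "tree", "water"), which is one-hot encoded before being fed to the network. Here a hand-written per-class colour fill plus noise stands in for the trained generator, and all class labels and colours are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 4x4 "colouring book": 0 = sky, 1 = tree, 2 = water (labels invented here)
label_map = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [2, 1, 1, 2],
    [2, 2, 2, 2],
])

# One-hot encode the labels, as real models do before feeding the generator
num_classes = 3
one_hot = np.eye(num_classes)[label_map]          # shape (4, 4, 3)

# Stand-in "generator": a base colour per class plus random texture noise.
# A trained network would instead synthesise learned texture, reflections
# and shadows for each region.
base_colours = np.array([
    [0.5, 0.7, 1.0],   # sky: light blue
    [0.1, 0.6, 0.2],   # tree: green
    [0.0, 0.3, 0.8],   # water: deep blue
])
image = one_hot @ base_colours                    # per-pixel class colour
image += 0.05 * rng.standard_normal(image.shape)  # fake "texture"
image = image.clip(0.0, 1.0)                      # valid RGB range

print(image.shape)  # (4, 4, 3): an RGB image the size of the sketch
```

The point of the sketch is the data flow: the user draws only the label map, and everything photographic is filled in downstream from it.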
An online tool has been unveiled which is capable of bringing black-and-white photographs to life instantaneously by adding colour to them using artificial intelligence. Colourisation of old images is normally a time-consuming process which requires specialist training and expensive software. The tool, ColouriseSG, is able to do it for free from only a single digital image and works on iconic historical photographs and old family portraits alike.
While great progress has been made recently in automatic image manipulation, it has been limited to object-centric images, such as faces, or structured scene datasets. In this work, we take a step towards general scene-level image editing by developing an automatic, interaction-free object removal model. Our model learns to find and remove objects from general scene images using image-level labels and unpaired data in a generative adversarial network (GAN) framework. We achieve this with two key contributions: a two-stage editor architecture consisting of a mask generator and an image in-painter that co-operate to remove objects, and a novel GAN-based prior for the mask generator that allows us to flexibly incorporate knowledge about object shapes. We experimentally show on two datasets that our method effectively removes a wide variety of objects using weak supervision only.
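The two-stage editor described in the abstract can be illustrated with a minimal sketch in which both learned stages are replaced by hand-written stand-ins: a "mask generator" that proposes which pixels belong to the object, and an "in-painter" that fills the masked region. In the paper both stages are networks trained jointly in a GAN framework; the heuristics below (and the toy scene values) are invented purely to show how the two stages compose.

```python
import numpy as np

def mask_generator(image, object_value):
    """Stand-in for the learned mask network: mark the object's pixels."""
    return image == object_value

def inpainter(image, mask):
    """Stand-in for the learned in-painter: fill masked pixels with the
    mean of the unmasked (background) pixels."""
    filled = image.astype(float).copy()
    filled[mask] = image[~mask].mean()
    return filled

# A 4x4 grey-scale "scene": background value 1.0, a 2x2 object of value 9.0
scene = np.ones((4, 4))
scene[1:3, 1:3] = 9.0

mask = mask_generator(scene, 9.0)   # stage 1: find the object
edited = inpainter(scene, mask)     # stage 2: remove it and fill the hole

print(edited)   # all pixels are now 1.0 -- the object is gone
```

The separation matters: because the mask and the fill are produced by distinct modules, each can be supervised differently, which is what lets the paper train with only image-level labels and unpaired data.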