A new piece of software developed by American tech company NVIDIA uses deep learning to elevate even the roughest sketches into works of art. The program, dubbed GauGAN after the French post-Impressionist painter Paul Gauguin, uses a tool called a generative adversarial network (GAN) to interpret simple lines and convert them into hyper-realistic images. Its application could help professionals across a range of disciplines, such as architecture and urban planning, render images and visualizations faster and with greater accuracy, according to the company. Simple shapes become mountains and lakes with just a stroke of what NVIDIA calls a 'smart paintbrush'. Artificial intelligence systems rely on neural networks, which try to simulate the way the brain works in order to learn.
Today at Nvidia GTC 2019, the company unveiled a stunning image creator. Using generative adversarial networks, users of the software can, with just a few clicks, sketch images that are nearly photorealistic. The software will instantly turn a couple of lines into a gorgeous mountaintop sunset. This is MS Paint for the AI age. Called GauGAN, the software is just a demonstration of what's possible with Nvidia's neural network platforms.
Researchers at Nvidia have created a new generative adversarial network model for producing realistic landscape images from a rough sketch or segmentation map, and while it's not perfect, it is certainly a step towards allowing people to create their own synthetic scenery. The GauGAN model is initially being touted as a tool to help urban planners, game designers, and architects quickly create synthetic images. The model was trained on over a million images, including 41,000 from Flickr, with researchers stating it acts as a "smart paintbrush" as it fills in the details on the sketch. "It's like a colouring book picture that describes where a tree is, where the sun is, where the sky is," Nvidia vice president of applied deep learning research Bryan Catanzaro said. "And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows and colours, based on what it has learned about real images."
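The "colouring book" input Catanzaro describes can be sketched as a small 2D grid of class labels. This is a minimal illustration of the idea, not NVIDIA's actual data format; the label names and IDs below are assumptions:

```python
import numpy as np

# Hypothetical label IDs for a toy segmentation map;
# GauGAN's real label set and encoding are not published here.
LABELS = {"sky": 0, "mountain": 1, "water": 2, "tree": 3}

# A 4x6 "colouring book" sketch: sky on top, a mountain ridge,
# and water at the bottom -- each cell names a region, not a pixel colour.
seg_map = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [2, 2, 2, 2, 2, 2],
])

# A generator network would typically consume a one-hot encoding of this
# map (one channel per class) and fill in texture, lighting and detail.
one_hot = np.eye(len(LABELS))[seg_map]
print(one_hot.shape)  # (4, 6, 4): height, width, one channel per class
```

The network's job, per the quote above, is to turn that sparse layout into a full image of details, reflections, shadows and colours.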
With a little help from AI, you can now create a Bob Ross-style landscape in seconds. In March, researchers from NVIDIA unveiled GauGAN, a system that uses AI to transform images scribbled onto a Microsoft Paint-like canvas into photorealistic landscapes -- just choose a label such as "water," "tree," or "mountain" the same way you'd normally choose a color, and the AI takes care of the rest. At the time, they described GauGAN as a "smart paintbrush" -- and now, they've released an online beta demo so you can try it out for yourself. The level of detail included in NVIDIA's system is remarkable. Draw a vertical line with a circle at the top using the "tree" label, for example, and the AI knows to make the bottom part the trunk and the top part the leaves.
GauGAN, named after post-Impressionist painter Paul Gauguin, creates photorealistic images from segmentation maps, which are labeled sketches that depict the layout of a scene. Artists can use paintbrush and paint bucket tools to design their own landscapes with labels like river, rock and cloud. A style transfer algorithm allows creators to apply filters -- changing a daytime scene to sunset, or a photorealistic image to a painting.
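The paint-bucket tool described above is, at heart, a flood fill over the label grid: it relabels one connected region of the sketch. Here is a plain-Python sketch of that behaviour, with no claim about NVIDIA's actual implementation (the function name and grid format are assumptions):

```python
from collections import deque

def paint_bucket(grid, row, col, new_label):
    """Flood-fill: relabel the connected region containing (row, col).

    `grid` is a list of lists of label strings; the fill spreads to
    4-connected neighbours that share the starting cell's label.
    """
    old_label = grid[row][col]
    if old_label == new_label:
        return grid
    queue = deque([(row, col)])
    while queue:
        r, c = queue.popleft()
        in_bounds = 0 <= r < len(grid) and 0 <= c < len(grid[0])
        if in_bounds and grid[r][c] == old_label:
            grid[r][c] = new_label
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid

# Repaint the connected "sky" region at the top-left as "sunset",
# like swapping a daytime scene's label before regenerating the image.
canvas = [
    ["sky",   "sky",   "tree"],
    ["sky",   "tree",  "water"],
    ["water", "water", "water"],
]
paint_bucket(canvas, 0, 0, "sunset")
print(canvas[0])  # ['sunset', 'sunset', 'tree']
```

Only the contiguous "sky" cells change; the "tree" and "water" regions are untouched, which is what makes the tool feel like a paint bucket rather than a global recolour.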