GauGAN Turns Doodles into Stunning, Realistic Landscapes - NVIDIA Blog

#artificialintelligence

A novice painter might set brush to canvas aiming to create a stunning sunset landscape -- craggy, snow-covered peaks reflected in a glassy lake -- only to end up with something that looks more like a multi-colored inkblot. But a deep learning model developed by NVIDIA Research can do just the opposite: it turns rough doodles into photorealistic masterpieces with breathtaking ease. The tool leverages generative adversarial networks, or GANs, to convert segmentation maps into lifelike images. The interactive app built on the model has been christened GauGAN, a lighthearted nod to the post-Impressionist painter Paul Gauguin. GauGAN could put a powerful tool for creating virtual worlds in the hands of everyone from architects and urban planners to landscape designers and game developers.
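To make the segmentation-map-to-image idea concrete, here is a minimal sketch in PyTorch of a generator conditioned on a one-hot segmentation map. It is illustrative only: the layer sizes and class count are assumptions, and GauGAN's actual network (built around spatially-adaptive normalization and trained adversarially) is far larger.

```python
# Minimal sketch: map a one-hot segmentation map to an RGB image.
# Purely illustrative; not NVIDIA's actual GauGAN architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegToImageGenerator(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB output scaled to [-1, 1]
        )

    def forward(self, seg_one_hot: torch.Tensor) -> torch.Tensor:
        return self.net(seg_one_hot)

# A 256x256 "doodle": every pixel carries a class label (sky, water, rock, ...).
labels = torch.randint(0, 10, (1, 256, 256))
one_hot = F.one_hot(labels, num_classes=10).permute(0, 3, 1, 2).float()
image = SegToImageGenerator()(one_hot)  # shape: (1, 3, 256, 256)
```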


Nvidia's latest AI tech translates text into landscape images

#artificialintelligence

Nvidia today detailed an AI system called GauGAN2, the successor to its GauGAN model, that lets users create lifelike landscape images that don't exist. Combining techniques like segmentation mapping, inpainting, and text-to-image generation in a single tool, GauGAN2 is designed to create photorealistic art with a mix of words and drawings. "Compared to state-of-the-art models specifically for text-to-image or segmentation map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher quality of images," Isha Salian, a member of Nvidia's corporate communications team, wrote in a blog post. "Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller or add a couple of trees in the foreground, or clouds in the sky."
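The article does not spell out how GauGAN2 fuses the text prompt with the drawn segmentation map, so the following is only a hedged sketch of one plausible fusion scheme in PyTorch: broadcast a sentence embedding over every pixel and concatenate it with the one-hot segmentation channels before the generator. The dimensions and the text encoder are stand-ins, not NVIDIA's design.

```python
# Hypothetical sketch of joint text + segmentation conditioning.
# The fusion scheme, dimensions, and text encoder are assumptions.
import torch
import torch.nn as nn

NUM_CLASSES, TEXT_DIM = 10, 32

class TextAndSegGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES + TEXT_DIM, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, seg_one_hot: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # Tile the phrase embedding across every spatial location, then
        # stack it with the segmentation channels as extra conditioning.
        b, _, h, w = seg_one_hot.shape
        text_map = text_emb.view(b, TEXT_DIM, 1, 1).expand(b, TEXT_DIM, h, w)
        return self.net(torch.cat([seg_one_hot, text_map], dim=1))

seg = torch.zeros(1, NUM_CLASSES, 128, 128)
seg[:, 3] = 1.0                    # e.g. a "mountain" class filling the canvas
phrase = torch.randn(1, TEXT_DIM)  # stand-in for an encoded phrase such as
                                   # "snow-capped mountain range"
image = TextAndSegGenerator()(seg, phrase)  # shape: (1, 3, 128, 128)
```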


Nvidia unveils incredible 'smart paintbrush' software that uses AI to turn simple doodles into art

Daily Mail - Science & tech

A new piece of software developed by American tech company NVIDIA uses deep learning to elevate even the roughest sketches into works of art. The program, dubbed GauGAN after the post-Impressionist painter Paul Gauguin, uses generative adversarial networks (GANs) to interpret simple lines and convert them into hyper-realistic images. Its application could help professionals across a range of disciplines, such as architecture and urban planning, render images and visualizations faster and with greater accuracy, according to the company. Simple shapes become mountains and lakes with just a stroke of what NVIDIA calls a 'smart paintbrush'. Artificial intelligence systems rely on neural networks, which try to simulate the way the brain works in order to learn.


NVIDIA's Canvas app turns doodles into AI-generated 'photos'

Engadget

NVIDIA has launched a new app you can use to paint lifelike landscape images -- even if you have zero artistic skills and a first grader can draw better than you. The new application is called Canvas, and it can turn childlike doodles and sketches into photorealistic landscape images in real time. It's now available for download as a free beta, though you can only use it if your machine is equipped with an NVIDIA RTX GPU. Canvas is powered by the GauGAN AI painting tool, which NVIDIA Research developed and trained using 5 million images. When the company first introduced GauGAN to the world, NVIDIA VP Bryan Catanzaro described its technology as a "smart paintbrush."


Nvidia GauGAN takes rough sketches and creates 'photo-realistic' landscape images

ZDNet

Researchers at Nvidia have created a new generative adversarial network model for producing realistic landscape images from a rough sketch or segmentation map, and while it's not perfect, it is certainly a step towards allowing people to create their own synthetic scenery. The GauGAN model is initially being touted as a tool to help urban planners, game designers, and architects quickly create synthetic images. The model was trained on over a million images, including 41,000 from Flickr, with researchers stating it acts as a "smart paintbrush" as it fills in the details on the sketch. "It's like a colouring book picture that describes where a tree is, where the sun is, where the sky is," Nvidia vice president of applied deep learning research Bryan Catanzaro said. "And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows and colours, based on what it has learned about real images."
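For readers unfamiliar with how such a model is fit to real photographs, the sketch below shows one generic adversarial training step in PyTorch: a discriminator learns to tell real photos from generated ones, while the generator learns to fool it. The patch-style discriminator, losses, and learning rates are placeholder assumptions, not NVIDIA's training recipe.

```python
# Generic GAN training step; architectures and hyperparameters are placeholders.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(10, 3, 3, padding=1), nn.Tanh())  # seg map -> image
D = nn.Conv2d(3, 1, 4, stride=2, padding=1)  # patch-style real/fake logits
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

seg = torch.rand(4, 10, 64, 64)   # batch of segmentation maps ("colouring books")
real = torch.rand(4, 3, 64, 64)   # the matching real photographs

# Discriminator step: push real photos toward 1, generated images toward 0.
fake = G(seg).detach()
d_real, d_fake = D(real), D(fake)
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator score the fakes as real.
d_on_fake = D(G(seg))
g_loss = bce(d_on_fake, torch.ones_like(d_on_fake))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```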