
Improving the Realism of Synthetic Images - Apple

#artificialintelligence

Most of today's successful neural networks are trained with supervision. To achieve high accuracy, however, the training sets need to be large, diverse, and accurately annotated, which is costly. An alternative to labeling huge amounts of data is to use synthetic images from a simulator. This is cheap because there is no labeling cost, but the synthetic images may not be realistic enough, resulting in poor generalization on real test images. To help close this performance gap, we've developed a method for refining synthetic images to make them look more realistic.
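
At a high level, such a refiner can be trained with an adversarial loss, so that refined images fool a discriminator trained against real photographs, plus a self-regularization term that keeps each refined image close to its synthetic input so the original annotations remain valid. Below is a minimal sketch of that combined objective; the function names and the weighting are illustrative assumptions, not Apple's exact formulation.

import tensorflow as tf

def refiner_loss(discriminator, refiner, synthetic_batch, self_reg_weight=0.5):
    # Refine the synthetic images.
    refined = refiner(synthetic_batch)
    # Adversarial term: push the discriminator to score refined images as real.
    d_out = discriminator(refined)
    adv = tf.reduce_mean(
        tf.keras.losses.binary_crossentropy(tf.ones_like(d_out), d_out))
    # Self-regularization term (L1): stay close to the synthetic input
    # so that its labels still describe the refined image.
    self_reg = tf.reduce_mean(tf.abs(refined - synthetic_batch))
    return adv + self_reg_weight * self_reg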


Generative Adversarial Networks: The Fight To See Which Neural Network Comes Out At Top

#artificialintelligence

In a world filled with technology and artificial intelligence, it is becoming increasingly hard to distinguish between what is real and what is fake. Look at the two pictures below. Can you tell which one is a real-life photograph and which one was created by artificial intelligence? The crazy thing is that both of these images are fake, created by NVIDIA's hyperrealistic face generator, which uses an algorithmic architecture called a generative adversarial network (GAN). Researching GANs and their applications in today's society, I found that they are used everywhere, from text-to-image generation to predicting the next frame in a video!


Composable Unpaired Image to Image Translation

arXiv.org Machine Learning

There has been remarkable recent work in unpaired image-to-image translation. However, with some exceptions, these methods are restricted to translation between single pairs of distributions. In this study, we extend one of these works into a scalable multi-distribution translation mechanism. Our translation models not only convert from one distribution to another but can also be stacked to create composite translation functions. We show that this composite property makes it possible to generate images with characteristics not seen in the training set. We also propose a decoupled training mechanism to train multiple distributions separately, which, we show, generates better samples than isolated joint training. Further, we perform a qualitative and quantitative analysis to assess the plausibility of the samples. The code is made available at https://github.com/lgraesser/im2im2im.
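
The stacking idea is simple to express in code: once each translator maps images between two distributions, a composite translation is just function composition. A minimal sketch, where the model names and loading step are hypothetical rather than the repository's actual API:

def compose(*translators):
    """Chain translation models so that compose(g_ab, g_bc)(x) == g_bc(g_ab(x))."""
    def composite(image):
        for g in translators:
            image = g(image)
        return image
    return composite

# Usage: translate from domain A to domain C via B without ever
# training a direct A -> C model (g_ab and g_bc are hypothetical).
# g_ab = load_translator("a_to_b")
# g_bc = load_translator("b_to_c")
# a_to_c = compose(g_ab, g_bc)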


How to Develop a Pix2Pix GAN for Image-to-Image Translation

#artificialintelligence

The Pix2Pix Generative Adversarial Network, or GAN, is an approach to training a deep convolutional neural network for image-to-image translation tasks. The careful configuration of the architecture as a type of image-conditional GAN allows both the generation of larger images than prior GAN models and strong performance across a variety of image-to-image translation tasks. In this tutorial, you will discover how to develop a Pix2Pix generative adversarial network for image-to-image translation. Discover how to develop DCGANs, conditional GANs, Pix2Pix, CycleGANs, and more with Keras in my new GANs book, with 29 step-by-step tutorials and full source code. Pix2Pix is a GAN model designed for general-purpose image-to-image translation. The approach was introduced by Phillip Isola, et al. in their 2016 paper titled "Image-to-Image Translation with Conditional Adversarial Networks" and presented at CVPR in 2017. The GAN architecture comprises a generator model that outputs new plausible synthetic images and a discriminator model that classifies images as real (from the dataset) or fake (generated).
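
As a concrete illustration of the image-conditional design, the Pix2Pix discriminator scores a (source, translation) pair rather than a single image, typically as a PatchGAN whose output units each classify one local patch. A minimal Keras sketch follows; the layer sizes here are assumptions for illustration, not the tutorial's exact configuration.

from tensorflow.keras import layers, Model, Input

def build_patchgan_discriminator(image_shape=(256, 256, 3)):
    src = Input(shape=image_shape)  # conditioning (source) image
    tgt = Input(shape=image_shape)  # real or generated translation
    x = layers.Concatenate()([src, tgt])
    # Downsampling convolutional blocks.
    for filters in (64, 128, 256, 512):
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    # One-channel "patch" map: each unit judges a local region as real or fake.
    patch = layers.Conv2D(1, 4, padding="same", activation="sigmoid")(x)
    return Model([src, tgt], patch)

d = build_patchgan_discriminator()
d.compile(optimizer="adam", loss="binary_crossentropy")

Conditioning the discriminator on the source image forces the generator to produce translations that are not only realistic but also consistent with their inputs.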


Towards Perceptual Image Dehazing by Physics-Based Disentanglement and Adversarial Training

AAAI Conferences

Single image dehazing is a challenging, under-constrained problem because of the ambiguity of the unknown scene radiance and transmission. Previous methods solve this problem using various hand-designed priors or by supervised training on synthetic hazy image pairs. In practice, however, the predefined priors are easily violated, and paired image data is unavailable for supervised training. In this work, we propose the Disentangled Dehazing Network, an end-to-end model that generates realistic haze-free images using only unpaired supervision. Our approach alleviates the paired-training constraint by introducing a physical-model-based disentanglement and reconstruction mechanism. Multi-scale adversarial training is employed to generate perceptually haze-free images. Experimental results on synthetic datasets demonstrate superior performance compared with state-of-the-art methods in terms of PSNR, SSIM, and CIEDE2000. Through training on purely natural haze-free and hazy images from our collected HazyCity dataset, our model can generate more perceptually appealing dehazing results.
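
For context, physics-based dehazing methods of this kind typically build on the standard atmospheric scattering model; whether the paper uses exactly this formulation is an assumption here:

I(x) = J(x) t(x) + A (1 - t(x))

where I is the observed hazy image, J is the haze-free scene radiance, t is the transmission map, and A is the global atmospheric light. The under-constrained nature of the problem is visible directly: J, t, and A must all be recovered from I alone.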