Generative Adversarial Neural Networks: Infinite Monkeys and The Great British Bake Off

@machinelearnbot

If you had an infinite number of monkeys typing at keyboards, could you produce Shakespeare? But how would you know once they'd typed Shakespeare? In this analogy, the monkeys are what AI calls a Generator, and the English student who checks their work to see whether they have written Shakespeare (or anything good at all) is called a Discriminator. These are the two components of a Generative Adversarial Neural Network. "Adversarial" networks are oddly named, since the two parts actually cooperate to make things.
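The analogy maps onto code fairly directly. Below is a minimal sketch of the two components in Keras; the layer sizes and the 28x28 image shape are illustrative assumptions, not details from the article.

```python
# Minimal GAN components: a Generator (the "monkeys") and a
# Discriminator (the "English student"). Shapes are illustrative.
from tensorflow.keras import layers, models

def build_generator(latent_dim=100):
    # Maps random noise to a candidate sample (here, a 28x28 image).
    return models.Sequential([
        layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
        layers.Dense(28 * 28, activation="tanh"),
        layers.Reshape((28, 28, 1)),
    ])

def build_discriminator():
    # Scores how plausible a sample looks (real vs. generated).
    return models.Sequential([
        layers.Flatten(input_shape=(28, 28, 1)),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
```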


GAN with Keras: Application to Image Deblurring – Sicara's blog

#artificialintelligence

We extract losses at two levels: at the end of the generator and at the end of the full model. The first is a perceptual loss computed directly on the generator's outputs; it compares the outputs of the first convolutions of VGG and ensures that the GAN model is oriented towards the deblurring task. The second is the Wasserstein loss, computed on the outputs of the whole model.
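As a concrete illustration, the two losses might be set up in Keras roughly as follows. The choice of VGG layer (`block3_conv3`) and the 256x256 input shape are assumptions for this sketch; the Sicara post may use different layers and weightings.

```python
# Sketch of the two losses: a VGG-based perceptual loss on the generator's
# outputs, and a Wasserstein loss on the outputs of the full model.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

# Frozen VGG16 feature extractor used by the perceptual loss.
vgg = VGG16(include_top=False, weights="imagenet", input_shape=(256, 256, 3))
feature_extractor = Model(vgg.input, vgg.get_layer("block3_conv3").output)
feature_extractor.trainable = False

def perceptual_loss(y_true, y_pred):
    # Compare early VGG feature maps of the sharp and deblurred images.
    return tf.reduce_mean(tf.square(feature_extractor(y_true) - feature_extractor(y_pred)))

def wasserstein_loss(y_true, y_pred):
    # Applied to the critic's scores at the end of the full GAN model.
    return tf.reduce_mean(y_true * y_pred)
```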


It's Getting Hard to Tell If a Painting Was Made by a Computer or a Human

#artificialintelligence

Cultural pundits can close the book on 2017: The biggest artistic achievement of the year has already taken place. It didn't happen in a paint-splattered studio on the outskirts of Beijing, Singapore, or Berlin. It didn't happen at the Venice Biennale. It happened in New Brunswick, New Jersey, just off Exit 9 on the Turnpike. Nobody would mistake this place for an incubator of fine art.



Adversarial Symmetric Variational Autoencoder

Neural Information Processing Systems

A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for the marginal log-likelihoods of observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence between the joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.
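Written out, the symmetric objective described above has roughly the following shape; the notation here (q_phi for the encoder-side joint, p_theta for the decoder-side joint) is chosen for illustration and may differ from the paper's own symbols.

```latex
% Encoder-side joint: q_\phi(x,z) = q(x)\, q_\phi(z \mid x)
% Decoder-side joint: p_\theta(x,z) = p(z)\, p_\theta(x \mid z)
\[
  \mathcal{L}_{\mathrm{sym}}
  = \mathrm{KL}\!\left( q_\phi(x,z) \,\|\, p_\theta(x,z) \right)
  + \mathrm{KL}\!\left( p_\theta(x,z) \,\|\, q_\phi(x,z) \right)
\]
% This symmetric divergence is minimized (via variational lower bounds and
% adversarial training) while the marginal log-likelihoods of observed data
% and latent codes, \(\mathbb{E}_{q(x)} \log p_\theta(x)\) and
% \(\mathbb{E}_{p(z)} \log q_\phi(z)\), are simultaneously maximized.
```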