
Overview of GANs (Generative Adversarial Networks) – Part I


The purpose of this article series is to provide an overview of GAN research and explain the nature of its contributions. I'm new to this area myself, so this will surely be incomplete, but hopefully it can provide some quick context for other newcomers. In Part I we'll introduce GANs at a high level and summarize the original paper. It's assumed you're familiar with the basics of neural networks. What is meant by generative?

Investigating Under and Overfitting in Wasserstein Generative Adversarial Networks

We investigate under- and overfitting in Generative Adversarial Networks (GANs), using discriminators unseen by the generator to measure generalization. We find that the model capacity of the discriminator has a significant effect on the generator's model quality, and that the generator's poor performance coincides with the discriminator underfitting. Contrary to our expectations, we find that generators with large model capacities relative to the discriminator do not show evidence of overfitting on CIFAR10, CIFAR100, and CelebA.
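The measurement idea, evaluating generator samples with a fresh discriminator the generator never trained against, can be sketched roughly as follows. The logistic-regression "discriminator", the toy Gaussian data, and all sizes are stand-ins of my own, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: "real" data from N(0, 1), "generated" data from N(0.5, 1).
real = rng.normal(0.0, 1.0, size=(400, 2))
fake = rng.normal(0.5, 1.0, size=(400, 2))

X = np.vstack([real, fake])
y = np.concatenate([np.ones(400), np.zeros(400)])  # 1 = real, 0 = generated

# Shuffle and split so the fresh discriminator is scored on held-out samples.
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]
X_tr, X_te, y_tr, y_te = X[:600], X[600:], y[:600], y[600:]

# Train a logistic-regression "discriminator" unseen by the generator.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))
    g = p - y_tr                      # gradient of the cross-entropy loss
    w -= 0.1 * X_tr.T @ g / len(X_tr)
    b -= 0.1 * g.mean()

def acc(X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return ((p > 0.5) == y).mean()

# A large train/test gap would indicate this discriminator is overfitting;
# near-chance accuracy on both would indicate it is underfitting.
gap = acc(X_tr, y_tr) - acc(X_te, y_te)
```

In this caricature the two distributions overlap heavily, so even a well-fit discriminator stays far from perfect accuracy; the interesting quantity is the train/test gap, not the raw score.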

A Gentle Introduction to the Progressive Growing GAN


Progressive Growing GAN is an extension to the GAN training process that allows for the stable training of generator models that can output large, high-quality images. It involves starting with a very small image and incrementally adding blocks of layers that increase the output size of the generator model and the input size of the discriminator model until the desired image size is achieved. This approach has proven effective at generating high-quality synthetic faces that are startlingly realistic. In this post, you will discover the progressive growing generative adversarial network for generating large images. Discover how to develop DCGANs, conditional GANs, Pix2Pix, CycleGANs, and more with Keras in my new GANs book, with 29 step-by-step tutorials and full source code.
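The grow-as-you-train idea can be caricatured in a few lines. The sketch below is purely structural and assumed by me: nearest-neighbour upsampling and a placeholder scalar weight stand in for the real convolutional blocks, fade-in, and training that the actual method uses. It only shows how stacking resolution-doubling blocks takes a 4x4 start to the target size:

```python
import numpy as np

def upsample(x):
    """Nearest-neighbour 2x upsampling: doubles both spatial axes."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def new_block(x, rng):
    """Stand-in for a freshly added block: upsample + a placeholder 'conv'."""
    h = upsample(x)
    w = rng.normal(size=())   # placeholder weight, not a trained filter
    return np.tanh(w * h)

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 4))   # generator starts at a tiny 4x4 output
target = 32
while img.shape[0] < target:    # grow: 4 -> 8 -> 16 -> 32
    img = new_block(img, rng)

print(img.shape)  # (32, 32)
```

In the real method each new block is faded in gradually (a weighted mix of the upsampled old output and the new block's output) so training stays stable; that detail is omitted here.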

Intro to Adversarial Machine Learning and Generative Adversarial Networks - KDnuggets


Machine learning is an ever-evolving field, so it can be easy to feel like you're out of the loop on the latest developments changing the world this week. One emerging area that has been getting a lot of buzz lately is GANs, or generative adversarial networks. So to keep you in the machine learning loop, we've put together a short crash course on GANs. With generative models, the aim is to model the distribution of a given dataset. For the generative models that we're talking about today, that dataset is usually a set of images, but it could also be other kinds of data, like audio samples or time-series data. There are two ways to go about getting a model of this distribution: implicitly or explicitly.
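A minimal sketch can make the implicit/explicit distinction concrete. Everything here is a toy of my own devising: 1-D data, a fitted Gaussian as the explicit model, and a hand-written `generator` standing in for a trained GAN generator:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 0.5, size=1000)   # toy 1-D "dataset"

# Explicit model: fit a parametric density, so we can evaluate p(x) directly.
mu, sigma = data.mean(), data.std()
def log_density(x):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Implicit model (GAN-style): a learned transform of noise. This hypothetical
# "trained" generator g(z) = mu + sigma * z can produce samples, but it gives
# us no density function to evaluate.
def generator(z):
    return mu + sigma * z

samples = generator(rng.normal(size=1000))
```

The asymmetry is the whole point: the explicit model answers "how likely is this x?", while the implicit model only answers "give me more x's like the data", which is exactly the regime GANs operate in.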

Training Generative Adversarial Networks Via Turing Test

In this article, we introduce a new mode for training Generative Adversarial Networks (GANs). Rather than minimizing the distance between the evidence distribution $\tilde{p}(x)$ and the generative distribution $q(x)$, we minimize the distance between $\tilde{p}(x_r)q(x_f)$ and $\tilde{p}(x_f)q(x_r)$. This adversarial pattern can be interpreted as a Turing test in GANs. It allows us to use information from real samples when training the generator, and it accelerates the whole training procedure. We even find that by just proportionally increasing the sizes of the discriminator and generator, training succeeds at 256x256 resolution without careful hyperparameter tuning.
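One loose reading of the paired objective, and this is my interpretation rather than the paper's construction, is that the discriminator judges ordered pairs: $(x_r, x_f)$ drawn from $\tilde{p}(x_r)q(x_f)$ versus the swapped $(x_f, x_r)$ from $\tilde{p}(x_f)q(x_r)$. The numpy sketch below uses toy 1-D distributions and a hand-set linear pair-scorer, all assumptions of mine, to show that the two pair distributions are distinguishable exactly when the generator mismatches the data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D setup (assumed, for illustration): real data and generator samples.
real = rng.normal(1.0, 1.0, size=256)
fake = rng.normal(0.0, 1.0, size=256)

# Ordered pairs from the two product distributions in the objective.
pos = np.stack([real, fake], axis=1)   # samples from p~(x_r) q(x_f)
neg = np.stack([fake, real], axis=1)   # samples from p~(x_f) q(x_r)

def pair_score(pairs, w):
    """A linear pair-discriminator: positive score = 'first entry looks real'."""
    return pairs @ w

w = np.array([1.0, -1.0])   # hand-set weights, just for the sketch
# If the generator matched the data, swapping the pair would change nothing
# and this margin would shrink toward zero.
margin = pair_score(pos, w).mean() - pair_score(neg, w).mean()
```

Because both entries of every pair pass through the scorer, gradients flow from real samples as well as fake ones, which matches the paper's claim that the generator gets to use information from real samples during training.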