
Investigating Under and Overfitting in Wasserstein Generative Adversarial Networks Machine Learning

We investigate under- and overfitting in Generative Adversarial Networks (GANs), using discriminators unseen by the generator to measure generalization. We find that the model capacity of the discriminator has a significant effect on the generator's model quality, and that the generator's poor performance coincides with the discriminator underfitting. Contrary to our expectations, we find that generators with large model capacities relative to the discriminator do not show evidence of overfitting on CIFAR10, CIFAR100, and CelebA.
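The abstract's core idea, measuring a generator's generalization against data it never trained on, can be illustrated with a toy experiment. The 1-D setup and function names below are illustrative assumptions, not the paper's method: we compare the empirical Wasserstein distance from generated samples to the training set against the distance to a held-out set, and a memorising generator shows a large gap between the two.

```python
import numpy as np

def wasserstein_1d(a, b):
    """Empirical Wasserstein-1 distance between two equal-size 1-D samples:
    sort both and average the absolute differences of matched order statistics."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 2000)        # real data seen during training
test = rng.normal(0.0, 1.0, 2000)         # held-out real data

memorising_gen = train.copy()             # a generator that memorised the training set
learned_gen = rng.normal(0.0, 1.0, 2000)  # a generator that learned the distribution

# Overfitting shows up as a train/test gap: the memoriser is at distance zero
# from its training data but not from held-out data.
gap_memorised = wasserstein_1d(memorising_gen, test) - wasserstein_1d(memorising_gen, train)
gap_learned = wasserstein_1d(learned_gen, test) - wasserstein_1d(learned_gen, train)
```

The memorising generator's gap is strictly positive, while a generator that captured the true distribution keeps both distances comparably small.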

Generative Adversarial Networks (GANs): An Overview

A GAN, or Generative Adversarial Network, is one of the most fascinating inventions in the field of AI. Many of the news stories we come across about machines achieving strikingly human-like feats are the work of GANs. For instance, if you have ever heard of AI bots that create human-like paintings, it is essentially a GAN behind the awe-inspiring strokes. Or if you have heard of AI bots that create human faces from scratch, faces that do not even exist, that too is entirely the imaginative work of a powerful GAN. GANs have many applications, and one is often led to wonder how machines can achieve such fascinating and, indeed, extensively creative accomplishments so efficiently. If you are an observer of the real world, you might have noticed that an individual, whether from the animal or plant kingdom, often grows stronger when it faces some sort of competition.
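That competition is formalized as a two-player minimax game between a generator and a discriminator. A minimal sketch of the standard (non-saturating) GAN losses, computed directly from discriminator logits (the function names here are illustrative, not from any particular library):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_losses(d_logits_real, d_logits_fake):
    """Discriminator loss: push real samples toward label 1 and fakes toward 0.
    Generator loss (non-saturating form): make the discriminator call fakes real."""
    p_real = sigmoid(d_logits_real)
    p_fake = sigmoid(d_logits_fake)
    d_loss = -np.mean(np.log(p_real) + np.log(1.0 - p_fake))
    g_loss = -np.mean(np.log(p_fake))
    return d_loss, g_loss

# At the theoretical equilibrium the discriminator is maximally confused and
# outputs probability 0.5 everywhere, i.e. logit 0 on both real and fake batches.
d_loss, g_loss = gan_losses(np.zeros(4), np.zeros(4))
```

At that equilibrium the discriminator loss equals 2 log 2 and the generator loss equals log 2, which is the well-known fixed point of the original GAN objective.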

Lower Dimensional Kernels for Video Discriminators Machine Learning

This work presents an analysis of the discriminators used in Generative Adversarial Networks (GANs) for video. We show that unconstrained video discriminator architectures induce a loss surface with high curvature which makes optimisation difficult. We also show that this curvature becomes more extreme as the maximal kernel dimension of video discriminators increases. With these observations in hand, we propose a family of efficient Lower-Dimensional Video Discriminators for GANs (LDVD GANs). The proposed discriminators improve the performance of the video GAN models they are applied to and demonstrate good performance on complex and diverse datasets such as UCF-101. In particular, we show that they can double the performance of Temporal-GANs and achieve state-of-the-art performance on a single GPU.
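One concrete reason lower-dimensional kernels help is sheer parameter count. A back-of-the-envelope comparison, a sketch of the general idea rather than the paper's exact LDVD architecture, between a full 3-D video kernel and a spatial-then-temporal factorisation:

```python
def full_3d_params(c_in, c_out, k):
    """A full k x k x k video kernel mixes space and time jointly."""
    return c_in * c_out * k ** 3

def factorised_params(c_in, c_out, k):
    """A (1, k, k) spatial convolution followed by a (k, 1, 1) temporal one."""
    return c_in * c_out * k * k + c_out * c_out * k

# Typical mid-network layer: 64 input and 64 output channels, 3x3x3 kernels.
full = full_3d_params(64, 64, 3)
low = factorised_params(64, 64, 3)
```

For this layer the factorised version needs 49,152 parameters against 110,592 for the full 3-D kernel, less than half, and the saving grows with the kernel's maximal dimension.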

Defending Against Adversarial Attacks by Leveraging an Entire GAN Machine Learning

Recent work has shown that state-of-the-art models are highly vulnerable to adversarial perturbations of the input. We propose cowboy, an approach to detecting and defending against adversarial attacks by using both the discriminator and generator of a GAN trained on the same dataset. We show that the discriminator consistently scores the adversarial samples lower than the real samples across multiple attacks and datasets. We provide empirical evidence that adversarial samples lie outside of the data manifold learned by the GAN. Based on this, we propose a cleaning method which uses both the discriminator and generator of the GAN to project the samples back onto the data manifold. This cleaning procedure is independent of the classifier and type of attack and thus can be deployed in existing systems.
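The two-step pipeline described above, score-based detection followed by projection back onto the data manifold, can be sketched on a toy problem. The generator `G`, discriminator `D`, and the 1-D manifold below are stand-ins I made up to illustrate the mechanism, not the paper's trained models:

```python
import numpy as np

# Toy setup: the "data manifold" is the line x2 = x1 in R^2. G maps a scalar
# latent z onto it; D scores samples by how close they sit to the manifold.
def G(z):
    return np.array([z, z])

def D(x):
    return float(np.exp(-abs(x[0] - x[1])))  # 1 on the manifold, lower off it

def detect(x, threshold=0.5):
    """Flag a sample as adversarial when the discriminator scores it low."""
    return D(x) < threshold

def clean(x, steps=200, lr=0.1):
    """Project x back onto the manifold: gradient descent on ||G(z) - x||^2
    in latent space, then return the reconstruction G(z*)."""
    z = 0.0
    for _ in range(steps):
        g = G(z)
        grad = 2.0 * ((g[0] - x[0]) + (g[1] - x[1]))  # d/dz of the squared error
        z -= lr * grad
    return G(z)

on_manifold = np.array([1.0, 1.0])   # a "real" sample
adversarial = np.array([1.0, 3.0])   # a sample pushed off the manifold
```

Here the adversarial point scores exp(-2) ≈ 0.14 and is flagged, while cleaning projects it to (2, 2), the nearest point on the manifold; as in the abstract, the procedure never touches the classifier.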

Effects of Dataset properties on the training of GANs Machine Learning

Generative Adversarial Networks are a new family of generative models, frequently used for generating photorealistic images. In theory, a GAN eventually reaches an equilibrium at which the generator produces pictures indistinguishable from the training set. In practice, however, a range of problems frequently prevents the system from reaching this equilibrium, with training stalling due to instabilities or mode collapse. This paper describes a series of experiments that try to identify patterns in how the training set affects the dynamics and eventual outcome of training. Generating images is a task with many applications. As images are a compact and convenient format for human communication, it is desirable for a computer to be able to generate them, as this would enable users to absorb a wide range of messages and information faster and with ease. While multiple software tools for generating images exist, for example Photoshop, they are merely a way for a human to translate an idea into an image, and using them takes a significant amount of effort and experience.