A Convex Duality Framework for GANs

Farzan Farnia, David Tse

Neural Information Processing Systems 

A generative adversarial network (GAN) is a minimax game between a generator, which mimics the true data distribution, and a discriminator, which distinguishes the samples produced by the generator from the real training samples. Given an unconstrained discriminator able to approximate any function, this game reduces to finding the generative model minimizing a divergence score, e.g. the Jensen-Shannon (JS) divergence, to the data distribution. However, in practice the discriminator is constrained to a smaller class F, such as convolutional neural nets. A natural question is then how the divergence-minimization interpretation changes as we constrain F. In this work, we address this question by developing a convex duality framework for analyzing GAN minimax problems. For a convex set F, this duality framework interprets the original vanilla GAN problem as finding the generative model with the minimum JS-divergence to the set of distributions penalized to match the moments of the data distribution, where the moments are specified by the discriminators in F. We show that this interpretation holds more generally for f-GAN and Wasserstein GAN. We further apply the convex duality framework to explain why regularizing the discriminator's Lipschitz constant, e.g.
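For reference, the unconstrained-discriminator reduction mentioned above can be sketched with the standard (Goodfellow et al.) vanilla GAN objective; the notation below is the usual one and is not taken from this paper:

```latex
% Vanilla GAN minimax objective over generator G and discriminator D:
\min_{G} \max_{D} \;
  \mathbb{E}_{x \sim P_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim P_{Z}}\!\left[\log\bigl(1 - D(G(z))\bigr)\right].

% With an unconstrained discriminator, the inner maximum is attained at
D^{*}(x) = \frac{P_{\mathrm{data}}(x)}{P_{\mathrm{data}}(x) + P_{G}(x)},

% and substituting D^{*} reduces the game to JS-divergence minimization:
\min_{G} \; 2\,\mathrm{JSD}\!\left(P_{\mathrm{data}} \,\|\, P_{G}\right) - \log 4.
```

Constraining D to a class F (e.g. convolutional nets) breaks this reduction, which is the gap the paper's convex duality framework addresses.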