A Variational Inequality Perspective on Generative Adversarial Nets

Gauthier Gidel, Hugo Berard, Pascal Vincent, Simon Lacoste-Julien

arXiv.org Machine Learning 

Stability has been a recurrent issue in training generative adversarial networks (GANs). One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet, surprisingly few studies have looked at optimization methods specifically designed for this adversarial training. In this work, we review the "variational inequality" framework, which contains most formulations of the GAN objective introduced so far. Tapping into the mathematical programming literature, we counter some common misconceptions about the difficulties of saddle point optimization and propose to extend to GAN training standard methods designed for variational inequalities, such as a stochastic version of the extragradient method, and we empirically investigate their behavior on GANs.
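To make the extragradient method mentioned above concrete, the following is a minimal sketch of its deterministic form on a toy bilinear saddle point problem, min_x max_y x·y, where simultaneous gradient descent-ascent famously spirals outward while extragradient converges. All names and step-size choices here are illustrative, not taken from the paper:

```python
def extragradient_bilinear(x0, y0, lr=0.1, steps=1000):
    """Extragradient on min_x max_y f(x, y) = x * y.

    The associated vector field is F(x, y) = (df/dx, -df/dy) = (y, -x).
    Each iteration takes an extrapolation (lookahead) step, then updates
    the iterate using the vector field evaluated at the lookahead point.
    """
    x, y = x0, y0
    for _ in range(steps):
        # Extrapolation step: gradient evaluated at the current point
        x_half = x - lr * y
        y_half = y + lr * x
        # Update step: gradient evaluated at the extrapolated point
        x, y = x - lr * y_half, y + lr * x_half
    return x, y

# Converges toward the saddle point (0, 0); plain simultaneous
# gradient descent-ascent with the same step size would diverge.
x, y = extragradient_bilinear(1.0, 1.0)
```

In the stochastic variant used for GAN training, the two gradient evaluations per iteration are replaced by noisy minibatch estimates, one for the extrapolation step and one for the update step.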
