First, we derive a well-behaved regularized relativistic GAN loss that addresses issues of mode dropping and non-convergence that were previously tackled via a collection of ad-hoc empirical tricks.
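As an illustrative sketch only (the specific choice of $f$, the penalty weight $\gamma$, and the notation below are assumptions for exposition, not necessarily the exact objective derived in this paper), a regularized relativistic pairing loss can take the form
\[
\mathcal{L}_D = \mathbb{E}_{z \sim p_z,\, x \sim p_{\mathcal{D}}}\big[ f\big( D_\psi(G_\theta(z)) - D_\psi(x) \big) \big] + \tfrac{\gamma}{2}\big( R_1 + R_2 \big), \qquad
\mathcal{L}_G = \mathbb{E}_{z \sim p_z,\, x \sim p_{\mathcal{D}}}\big[ f\big( D_\psi(x) - D_\psi(G_\theta(z)) \big) \big],
\]
with $f(t) = \log(1 + e^{t})$ the softplus, $R_1 = \mathbb{E}_{x \sim p_{\mathcal{D}}}\big[ \lVert \nabla_x D_\psi(x) \rVert^2 \big]$ and $R_2 = \mathbb{E}_{z \sim p_z}\big[ \lVert \nabla_{\hat{x}} D_\psi(\hat{x}) \rVert^2 \big]$ for $\hat{x} = G_\theta(z)$ the zero-centered gradient penalties on real and generated samples. The relativistic term couples each generated sample to a real one instead of scoring them in isolation, which is what discourages the discriminator from collapsing onto a subset of modes.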


There is a widespread claim that GANs are difficult to train, and GAN architectures in the literature are littered with empirical tricks.