Refining Deep Generative Models via Wasserstein Gradient Flows
Abdul Fatir Ansari, Ming Liang Ang, Harold Soh
Deep generative modeling has seen impressive advances in recent years, to the point where it is now commonplace to see simulated samples (e.g., images) that closely resemble real-world data. However, generation quality is generally inconsistent for any given model and can vary dramatically between samples. We introduce Discriminator Gradient flow (DGflow), a new technique that improves generated samples via the gradient flow of entropy-regularized f-divergences between the real and the generated data distributions. The gradient flow takes the form of a nonlinear Fokker-Planck equation, which can be easily simulated by sampling from the equivalent McKean-Vlasov process. By refining inferior samples, our technique avoids the wasteful sample rejection used by previous methods such as DRS and MH-GAN. Compared to existing works that focus on specific GAN variants, we show that our refinement approach can be applied to GANs with vector-valued critics and even to other deep generative models such as VAEs and Normalizing Flows. Empirical results on multiple synthetic, image, and text datasets demonstrate that DGflow leads to significant improvement in the quality of generated samples for a variety of generative models, outperforming the state-of-the-art Discriminator Optimal Transport (DOT) and Discriminator Driven Latent Sampling (DDLS) methods.

Deep generative models (DGMs) have excelled at numerous tasks, from generating realistic images (Brock et al., 2019) to learning policies in reinforcement learning (Ho & Ermon, 2016).
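To make the simulation step concrete, below is a minimal Python (PyTorch) sketch of the idea, not the authors' implementation: for the KL divergence, the drift of the entropy-regularized gradient flow reduces to the gradient of a pretrained discriminator's logits (since the density ratio is estimated as exp(d(x))), and the McKean-Vlasov process is simulated with an Euler-Maruyama scheme. The function name refine_samples and the values of eta, gamma, and n_steps are illustrative assumptions.

    # Minimal sketch (assumptions noted above): Euler-Maruyama simulation of a
    # Langevin-type process whose drift is the gradient of a pretrained
    # discriminator d(x) returning one logit per sample.
    import torch

    def refine_samples(x, d, n_steps=25, eta=0.1, gamma=0.01):
        """Refine generated samples x along the discretized gradient flow.

        For the KL divergence, f'(rho) = log(rho) + 1 and rho(x) ~ exp(d(x)),
        so the drift reduces to grad_x d(x).
        """
        x = x.clone()
        for _ in range(n_steps):
            x.requires_grad_(True)
            # Drift: gradient of the discriminator logits w.r.t. the samples.
            drift = torch.autograd.grad(d(x).sum(), x)[0]
            noise = torch.randn_like(x)
            # Euler-Maruyama update with diffusion scale sqrt(2 * eta * gamma).
            x = (x + eta * drift + (2.0 * eta * gamma) ** 0.5 * noise).detach()
        return x

In this reading, the noise term corresponds to the entropy regularizer; setting gamma to zero would recover a deterministic gradient ascent on the discriminator.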
Dec-1-2020