
Graphical Generative Adversarial Networks

Chongxuan LI, Max Welling, Jun Zhu, Bo Zhang

Neural Information Processing Systems

Graphical-GAN uses deep implicit likelihood functions [10] to model complex data. This makes Graphical-GAN flexible enough to model structured data, but inference and learning are challenging due to the presence of deep implicit likelihoods and complex structures.
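To make "deep implicit likelihood" concrete, here is a minimal toy sketch (not the paper's code; network shapes and noise scale are illustrative assumptions): the model can draw samples x ~ p(x | z) by pushing the latent z and injected noise through a network, but the density p(x | z) has no closed form, so likelihood-based inference is not directly available.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy generator with an implicit likelihood: we can sample
# x given z, but cannot evaluate p(x | z) in closed form.
W1 = rng.normal(size=(8, 2))  # illustrative first-layer weights
W2 = rng.normal(size=(2, 8))  # illustrative output-layer weights

def sample_x_given_z(z):
    """Draw x ~ p(x | z) by pushing z plus injected noise through the net."""
    eps = rng.normal(size=z.shape)      # noise source makes p(x | z) implicit
    h = np.tanh(W1 @ (z + 0.1 * eps))   # nonlinear hidden layer
    return W2 @ h                       # observed sample x

z = rng.normal(size=2)
x = sample_x_given_z(z)
print(x.shape)  # (2,)
```

Because only sampling is available, learning proceeds adversarially (a discriminator compares model samples with data) rather than by maximizing an explicit likelihood.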


Reviews: Graphical Generative Adversarial Networks

Neural Information Processing Systems

This paper proposes Graphical-GAN, a variant of GAN that combines the expressivity of graphical models (in particular, Bayesian networks) with the generative inductive bias of Generative Adversarial Networks. For highly structured latent variables, such as the ones considered in this work, the discriminator's task of distinguishing (X, Z) samples from the two distributions can be difficult. As a second major contribution, the work proposes a learning procedure inspired by Expectation Propagation (EP). Here, the factorization structure of the graphical model is explicitly exploited to make the discriminator's task "easier" by comparing only subsets of variables. Finally, the authors perform experiments for controlled generation using a GAN model with a mixture-of-Gaussians prior, and a state-space structure, to empirically validate their approach.


Graphical Generative Adversarial Networks

LI, Chongxuan, Welling, Max, Zhu, Jun, Zhang, Bo

Neural Information Processing Systems

We propose Graphical Generative Adversarial Networks (Graphical-GAN) to model structured data. We introduce a structured recognition model to infer the posterior distribution of latent variables given observations. We generalize the Expectation Propagation (EP) algorithm to learn the generative model and recognition model jointly. Finally, we present two important instances of Graphical-GAN, i.e. Gaussian Mixture GAN (GMGAN) and State Space GAN (SSGAN), which can successfully learn the discrete and temporal structures on visual datasets, respectively.


Graphical Generative Adversarial Networks

LI, Chongxuan, Welling, Max, Zhu, Jun, Zhang, Bo

Neural Information Processing Systems

We propose Graphical Generative Adversarial Networks (Graphical-GAN) to model structured data. Graphical-GAN conjoins the power of Bayesian networks in compactly representing the dependency structures among random variables with that of generative adversarial networks in learning expressive dependency functions. We introduce a structured recognition model to infer the posterior distribution of latent variables given observations. We generalize the Expectation Propagation (EP) algorithm to learn the generative model and recognition model jointly. Finally, we present two important instances of Graphical-GAN, namely Gaussian Mixture GAN (GMGAN) and State Space GAN (SSGAN), which can successfully learn the discrete and temporal structures on visual datasets, respectively.
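The "discrete structure" learned by GMGAN comes from its structured prior: a discrete mixture component is drawn first, then a Gaussian latent conditioned on that component. A minimal sketch of sampling such a prior (dimensions, the number of components, and the component means are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch of a mixture-of-Gaussians prior, as in GMGAN:
# a cluster id k is sampled first, then a continuous latent z around
# that cluster's mean. A generator would then map (k, z) to an image.
K = 5                                    # number of mixture components (assumed)
means = rng.normal(size=(K, 16))         # illustrative per-component means

def sample_prior():
    k = int(rng.integers(K))             # discrete structure: which cluster
    z = means[k] + rng.normal(size=16)   # continuous latent around the mean
    return k, z

k, z = sample_prior()
print(z.shape)  # (16,)
```

After training, varying k while holding the Gaussian noise fixed is what enables controlled generation across the discrete modes.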



Graphical Generative Adversarial Networks

Li, Chongxuan, Welling, Max, Zhu, Jun, Zhang, Bo

arXiv.org Machine Learning

We propose Graphical Generative Adversarial Networks (Graphical-GAN) to model structured data. Graphical-GAN conjoins the power of Bayesian networks in compactly representing the dependency structures among random variables with that of generative adversarial networks in learning expressive dependency functions. We introduce a structured recognition model to infer the posterior distribution of latent variables given observations. We propose two alternative divergence minimization approaches to learn the generative model and recognition model jointly. The first one treats all variables as a whole, while the second one utilizes the structural information by checking the individual local factors defined by the generative model, and works better in practice. Finally, we present two important instances of Graphical-GAN, namely Gaussian Mixture GAN (GMGAN) and State Space GAN (SSGAN), which can successfully learn the discrete and temporal structures on visual datasets, respectively.
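The contrast between the two divergence minimization approaches can be sketched by how a joint sample from a chain-structured model (as in SSGAN) is presented to the discriminator. This is an illustrative assumption-laden sketch, not the paper's implementation; the chain length and dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

T, dz, dx = 4, 3, 5                     # chain length and dims (assumed)
zs = rng.normal(size=(T, dz))           # latent chain z_1 .. z_T
xs = rng.normal(size=(T, dx))           # observations x_1 .. x_T

def global_input(zs, xs):
    """First approach: treat all variables as a whole.
    The discriminator sees one big concatenated vector."""
    return np.concatenate([zs.ravel(), xs.ravel()])

def local_inputs(zs, xs):
    """Second approach: exploit the factorization. The discriminator only
    compares one local factor at a time, e.g. (z_{t-1}, z_t, x_t) for the
    transition p(z_t | z_{t-1}) and emission p(x_t | z_t)."""
    return [np.concatenate([zs[t - 1], zs[t], xs[t]]) for t in range(1, T)]

print(global_input(zs, xs).shape)                    # (32,)
print(len(local_inputs(zs, xs)))                     # 3
```

Each local input is much lower-dimensional than the global one, which is one intuition for why the factor-wise variant makes the discriminator's task easier and works better in practice.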