f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generative-adversarial training method makes it possible to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing, more general variational divergence estimation approach. We show that any $f$-divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models.
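The abstract's description of a generative neural sampler (random input vector in, sample out) can be sketched as follows. This is a minimal toy illustration, not the paper's architecture: the two-layer network, weight shapes, and output dimension are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# The network weights implicitly define the output distribution.
# Shapes here are arbitrary illustrative choices: 4-d noise -> 16 hidden -> 2-d sample.
W1 = rng.standard_normal((16, 4)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((2, 16)) * 0.5
b2 = np.zeros(2)

def sample(n):
    """Draw n samples: random input vector -> feedforward pass -> sample."""
    z = rng.standard_normal((n, 4))   # random input vectors
    h = np.tanh(z @ W1.T + b1)        # hidden layer
    return h @ W2.T + b2              # samples in R^2

x = sample(1000)
print(x.shape)  # (1000, 2)
```

Sampling and derivatives (by backpropagation through the network) are cheap here, but the induced density of `x` has no tractable form, which is exactly the likelihood/marginalization limitation the abstract notes.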
Reviews: f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
Technical quality: I am currently on the fence with respect to technical quality, but hope the authors can clarify the following in the rebuttal. The starting point for the method is a divergence D_f(P‖Q) which we aim to minimize. Unfortunately, the mini-max objective function of Eq. (6) is a lower bound on this divergence. This seems problematic, as optimizing Eq. (6) would then not guarantee anything with respect to the original divergence, regardless of how tight the bound is. This is in stark contrast to variational EM, which maximizes a lower bound on the log-likelihood, a quantity we also aim to maximize.
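The bound the reviewer refers to is the variational representation D_f(P‖Q) ≥ sup_T E_P[T(X)] − E_Q[f*(T(X))], where f* is the convex conjugate of f. A minimal numerical sketch for the KL case (f(u) = u log u, so f*(t) = exp(t − 1)): the toy distributions P = N(1, 1) and Q = N(0, 1) are assumptions for illustration, chosen because the optimal T and the true KL are available in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

xp = rng.normal(1.0, 1.0, n)  # samples from P = N(1, 1)
xq = rng.normal(0.0, 1.0, n)  # samples from Q = N(0, 1)

def T(x):
    # Variational function; here the analytically optimal choice for KL,
    # T*(x) = 1 + log(p(x)/q(x)) = x + 0.5 for these two Gaussians.
    return x + 0.5

def f_star(t):
    # Convex conjugate of f(u) = u*log(u): f*(t) = exp(t - 1).
    return np.exp(t - 1.0)

# Monte Carlo estimate of the lower bound E_P[T(X)] - E_Q[f*(T(X))].
bound = T(xp).mean() - f_star(T(xq)).mean()

true_kl = 0.5  # closed-form KL(N(1,1) || N(0,1)) = mu^2 / 2
print(bound, true_kl)
```

Any other choice of T yields a strictly smaller value, which is why the inner maximization over T tightens the bound while the outer minimization over the sampler's parameters is only guaranteed with respect to the bound, not the divergence itself, precisely the gap the review raises.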
Nowozin, Sebastian, Cseke, Botond, Tomioka, Ryota