Reviews: Conditional Generative Moment-Matching Networks
–Neural Information Processing Systems
The naive approach to extending GMMNs to the conditional setting is to estimate a GMMN for each conditional distribution, with all conditional distributions sharing parameters through the same neural network. The problem with this approach is that each conditional distribution has very few examples; when the conditioning variables are continuous, each conditional distribution may have only a single example, causing a data sparsity problem. The proposed approach instead treats all the conditional distributions as a family and matches the model to the conditional embedding operator directly, rather than matching each conditional distribution individually. The advantage of the proposed approach seems clear, but in some cases I can still see the naive approach doing a reasonable job, for example in conditional generation where the conditioning variable takes one of 10 values, as in MNIST. It would be interesting to compare against such a naive approach as a baseline.
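To make the distinction concrete, below is a minimal NumPy sketch of the conditional-embedding matching objective (squared CMMD between the empirical conditional embedding operators of the data pairs and the model's samples). The RBF kernel, bandwidth `gamma`, and regularizer `lam` are illustrative assumptions on my part, not the paper's exact settings.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of A and rows of B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def cmmd_sq(Xd, Yd, Xs, Ys, lam=0.1, gamma=1.0):
    """Squared CMMD between the empirical conditional embedding operator
    of data pairs (Xd, Yd) and that of model samples (Xs, Ys).
    lam is the Tikhonov regularizer on the input-kernel matrices."""
    n, m = len(Xd), len(Xs)
    Kd = rbf_kernel(Xd, Xd, gamma)          # kernels on conditioning variables
    Ks = rbf_kernel(Xs, Xs, gamma)
    Ld = rbf_kernel(Yd, Yd, gamma)          # kernels on outputs
    Ls = rbf_kernel(Ys, Ys, gamma)
    Kds = rbf_kernel(Xd, Xs, gamma)         # cross-kernels
    Lds = rbf_kernel(Yd, Ys, gamma)
    Cd = np.linalg.inv(Kd + lam * np.eye(n))  # (K_d + lambda I)^{-1}
    Cs = np.linalg.inv(Ks + lam * np.eye(m))  # (K_s + lambda I)^{-1}
    # ||C_d - C_s||_F^2 expanded into three trace terms.
    t1 = np.trace(Kd @ Cd @ Ld @ Cd)
    t2 = np.trace(Ks @ Cs @ Ls @ Cs)
    t3 = np.trace(Kds @ Cs @ Lds.T @ Cd)
    return t1 + t2 - 2.0 * t3
```

Note that, unlike the naive per-condition MMD, this single objective uses every (x, y) pair jointly, which is exactly why it sidesteps the sparsity issue when each x value appears only once.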