

Bias and Generalization in Deep Generative Models: An Empirical Study

Neural Information Processing Systems

In high dimensional settings, density estimation algorithms rely crucially on their inductive bias. Despite recent empirical success, the inductive bias of deep generative models is not well understood. In this paper we propose a framework to systematically investigate bias and generalization in deep generative models of images by probing the learning algorithm with carefully designed training datasets. By measuring properties of the learned distribution, we are able to find interesting patterns of generalization. We verify that these patterns are consistent across datasets, common models and architectures.
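A toy sketch of the probing methodology the abstract describes: train on a dataset where a simple property (here, object count) is fixed by design, then measure the distribution of that property in the model's samples. Everything below is a hypothetical stand-in (single-pixel "objects", a simulated generator), not the paper's actual datasets or models:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(n_dots, size=16):
    """Toy binary image containing exactly n_dots single-pixel 'objects'."""
    img = np.zeros((size, size))
    idx = rng.choice(size * size, n_dots, replace=False)
    img.flat[idx] = 1.0
    return img

def count_objects(img):
    """Property probe: objects are single pixels here, so counting is a sum."""
    return int(img.sum())

# Carefully designed training set: every image contains exactly 3 objects.
train = [make_image(3) for _ in range(100)]

# Stand-in for a trained generator's samples; we simulate a model that
# generalizes to nearby counts (2, 3, or 4) rather than copying the mode.
samples = [make_image(int(k))
           for k in rng.choice([2, 3, 4], 100, p=[0.25, 0.5, 0.25])]

# Measured property distribution of the learned model's samples.
counts, freqs = np.unique([count_objects(s) for s in samples],
                          return_counts=True)
print(dict(zip(counts.tolist(), (freqs / freqs.sum()).tolist())))
```

Comparing the measured histogram against the (degenerate) training histogram is what reveals the generalization pattern.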





General response (R1, R2, R3)

Neural Information Processing Systems

Dear Reviewers, we thank you for taking the time to provide valuable feedback. Below we address the main issues raised. The method's performance depends on our ability to predict the distribution over future frames with low entropy; we will emphasize these aspects more in a revised version. We use RNNs to model the dynamics in the latent space.
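The latent-dynamics idea mentioned above can be sketched as a recurrent model that steps over latent codes and predicts the next one. This is a minimal, untrained Elman-RNN sketch with hypothetical dimensions; in the described setup the codes would come from an encoder applied to video frames:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, hidden_dim, T = 8, 16, 5

# Hypothetical latent codes z_1..z_T (random stand-ins for encoder outputs).
z = rng.normal(size=(T, latent_dim))

# Elman RNN: h_t = tanh(W_h h_{t-1} + W_z z_t + b), prediction = W_out h_t
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_z = rng.normal(scale=0.1, size=(hidden_dim, latent_dim))
b = np.zeros(hidden_dim)
W_out = rng.normal(scale=0.1, size=(latent_dim, hidden_dim))

h = np.zeros(hidden_dim)
preds = []
for t in range(T - 1):
    h = np.tanh(W_h @ h + W_z @ z[t] + b)
    preds.append(W_out @ h)  # predicted latent for step t + 1

preds = np.stack(preds)
print(preds.shape)  # one predicted latent per next step
```

Training would minimize a distributional loss between `preds[t]` and `z[t + 1]`; only the data flow is shown here.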



Feature Likelihood Divergence: Evaluating the Generalization of Generative Models Using Samples

Neural Information Processing Systems

However, current methods for evaluating such models remain incomplete: standard likelihood-based metrics do not always apply and rarely correlate with perceptual fidelity, while sample-based metrics such as FID are insensitive to overfitting, i.e., to a model's inability to generalize beyond the training set.
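The overfitting blind spot follows directly from how Fréchet-style metrics are computed: a "generator" that memorizes and replays the training set matches the training feature statistics exactly, so its score is perfect despite zero generalization. A minimal 1-D sketch (scalar features, so the covariance square root reduces to a scalar; this is an illustration of the failure mode, not the paper's proposed metric):

```python
import numpy as np

def fid_1d(x, y):
    """Frechet distance between two 1-D feature sets, each fit as a Gaussian
    (the scalar analogue of FID)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    return (mu_x - mu_y) ** 2 + var_x + var_y - 2 * np.sqrt(var_x * var_y)

rng = np.random.default_rng(0)
train_feats = rng.normal(0.0, 1.0, 10_000)

# A memorizing model replays the training set: its FID against the
# training features is ~0 even though it generalizes to nothing new.
memorized = train_feats.copy()
print(fid_1d(train_feats, memorized))

# A model drawing genuinely fresh samples from the true distribution
# scores slightly worse, purely from sampling noise.
fresh = rng.normal(0.0, 1.0, 10_000)
print(fid_1d(train_feats, fresh))
```

This is why sample-based metrics need a separate test for generalization, which is the gap the abstract's proposed divergence targets.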