Flows for simultaneous manifold learning and density estimation
We introduce manifold-learning flows (ℳ-flows), a new class of generative models that simultaneously learn the data manifold as well as a tractable probability density on that manifold. Combining aspects of normalizing flows, GANs, autoencoders, and energy-based models, they have the potential to represent data sets with a manifold structure more faithfully and provide handles on dimensionality reduction, denoising, and out-of-distribution detection. We argue why such models should not be trained by maximum likelihood alone and present a new training algorithm that separates manifold and density updates. In a range of experiments we demonstrate how ℳ-flows learn the data manifold and allow for better inference than standard flows in the ambient data space.
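The idea of separating manifold and density updates can be illustrated with a deliberately simple toy sketch: data near a one-dimensional linear "manifold" in a two-dimensional ambient space, a linear chart in place of the paper's invertible flow, and a Gaussian in place of the learned latent density. Everything here (the linear chart, the closed-form updates, the variable names) is an illustrative assumption for exposition, not the ℳ-flow architecture or training algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: points scattered tightly around a 1-D line (the "manifold")
# embedded in 2-D ambient space.
true_dir = np.array([3.0, 4.0]) / 5.0
u_true = rng.normal(1.0, 2.0, size=500)
X = np.outer(u_true, true_dir) + 0.01 * rng.normal(size=(500, 2))

# Phase 1 -- manifold update: fit the chart by minimizing the
# reconstruction error ||x - g(h(x))||^2.  For a linear chart through
# the origin this reduces to the top principal direction of the data.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
w = Vt[0]  # learned manifold direction (unit vector, sign-ambiguous)

# Phase 2 -- density update: maximum likelihood on the manifold
# coordinates u = h(x), holding the chart fixed.  For a Gaussian the
# MLE is just the sample mean and standard deviation.
u = X @ w
mu, sigma = u.mean(), u.std()

# Generation: sample a coordinate on the manifold, then decode it
# back into the ambient space.
samples = np.outer(rng.normal(mu, sigma, size=5), w)
print(samples.shape)
```

The two phases never share a loss: the chart is fit purely by reconstruction, and the density purely by likelihood in the learned coordinates, which mirrors (in miniature) why the paper argues against training such models by maximum likelihood alone.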
Review for NeurIPS paper: Flows for simultaneous manifold learning and density estimation
In lines 245-248 the authors discuss a fair comparison between the different methods and mention their effort to keep the total number of coupling layers the same across methods. Can the authors please also comment on the difference in the number of parameters? Since the coupling layers in ℳ-flows do not always act on data of the same dimensionality as those in regular AF flows, the number of parameters can differ even with the same number of coupling layers. For the CelebA dataset, have you tried training ℳ-flows with a different n than 512? Can you also explain in the main text, at a high level, why including the SCANDAL loss consistently leads to a larger closure for all methods (lower closure is better)? In general, since the supplementary material contains so much more material, it would help the reader if you referred more frequently to the relevant parts of the supplementary material from the main text.
Meta-review for NeurIPS paper: Flows for simultaneous manifold learning and density estimation
All reviewers agree that the presented technique for simultaneous manifold and density estimation is interesting and novel. However, they also agree that the paper leaves important questions open. While one of the reviewers would like to see a stronger statistical analysis before acceptance, the others believe that the paper is above the acceptance threshold and that the community would benefit from its communication. To address the concerns of the reviewers, the camera-ready paper needed to include at least the following results: 1. Include results that investigate whether the invertible nature of the normalising flow in the decoder is useful, e.g. by considering a version of the ℳ-flow where g is not constrained to be invertible; in the same vein, a comparison with a simple VAE baseline should be included. 2. Investigate how the results on CelebA depend on the latent dimension n.