Reviews: Continuous Hierarchical Representations with Poincaré Variational Auto-Encoders

Neural Information Processing Systems 

Originality: The approach builds on ideas similar to very recent and concurrent work (Ganea et al., 2018; Ovinnikov, 2019; Nagano et al., 2019), but the paper makes clear how it differs from that related work.

Quality: The submission appears technically sound, with detailed experimental results. The empirical comparison is mostly against the Euclidean counterpart. This is fair, of course, but it would be interesting to see how the method compares empirically with the Poincaré Wasserstein Autoencoder (Ovinnikov, 2019) and with the hyperboloid model of Nagano et al. (2019): do they yield similar latent representations, and how do the respective sample qualities compare?

Clarity: The background on Riemannian geometry is to the point, so the paper is for the most part accessible to readers without training in non-Euclidean geometry. Nevertheless, I feel that readers could benefit from more high-level guidance in Appendix B, e.g., what do we learn from Sections B.8 and B.9?

-Significance: I feel that this is significant work, and others can build on these ideas either methodologically or experimentally.
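As an aside on the question of whether the Poincaré-ball and hyperboloid parameterisations yield comparable latent representations: the two models are isometric via the standard diffeomorphism x ↦ x'/(1 + x₀), so geodesic distances, and hence the intrinsic geometry of the latent space, agree exactly; any empirical differences would stem from optimisation and parameterisation rather than geometry. A quick numerical sanity check of this isometry (illustrative only, using NumPy; function names are mine, not from either paper):

```python
import numpy as np

def lorentz_dist(x, y):
    # Hyperboloid distance: arccosh of the negated Lorentz
    # (Minkowski) inner product <x, y>_L = -x0*y0 + <x', y'>.
    inner = -x[0] * y[0] + np.dot(x[1:], y[1:])
    return np.arccosh(np.clip(-inner, 1.0, None))  # clip guards rounding below 1

def to_ball(x):
    # Standard diffeomorphism from the hyperboloid to the Poincaré ball.
    return x[1:] / (1.0 + x[0])

def poincare_dist(u, v):
    # Poincaré-ball distance: arccosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2))).
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

def lift(v):
    # Embed v in R^n onto the hyperboloid in R^{n+1}: x0 = sqrt(1 + |v|^2).
    return np.concatenate(([np.sqrt(1.0 + v @ v)], v))

rng = np.random.default_rng(0)
x, y = lift(rng.normal(size=2)), lift(rng.normal(size=2))
print(np.isclose(lorentz_dist(x, y), poincare_dist(to_ball(x), to_ball(y))))  # → True
```

So identical posteriors expressed in either chart induce the same latent geometry; a side-by-side comparison would therefore mainly probe numerical stability and sample quality.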