BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling

Lars Maaløe, Marco Fraccaro, Valentin Liévin, Ole Winther

Neural Information Processing Systems

However, their performance in terms of test likelihood and quality of generated samples has been surpassed by autoregressive models without stochastic units. Furthermore, flow-based models have recently been shown to be an attractive alternative that scales well to high-dimensional data.






Invariant Representations without Adversarial Training

Daniel Moyer, Shuyang Gao, Rob Brekelmans, Aram Galstyan, Greg Ver Steeg

Neural Information Processing Systems

We show that adversarial training is unnecessary and sometimes counter-productive; we instead cast invariant representation learning as a single information-theoretic objective that can be directly optimized.


ec51d1fe4bbb754577da5e18eb54e6d1-Paper-Conference.pdf

Neural Information Processing Systems

Frequently, transformations occurring in data can be better represented by a subset of a group than by a group as a whole, e.g., rotations in [−90°, 90°]. In such cases, a model that respects equivariance partially is better suited to represent the data.