

Posterior Collapse


Posterior Collapse of a Linear Latent Variable Model

Neural Information Processing Systems

This work identifies the existence and cause of a type of posterior collapse that frequently occurs in Bayesian deep learning practice. For a general linear latent variable model that includes linear variational autoencoders as a special case, we precisely identify the nature of posterior collapse as a competition between the likelihood and the regularization of the mean due to the prior. Our result suggests that posterior collapse may be related to neural collapse and dimensional collapse, and could be a subclass of a general problem of learning for deeper architectures.
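The likelihood-versus-prior competition described above can be illustrated with a minimal numerical sketch (not the paper's derivation): in a 1-D linear VAE with decoder x ~ N(w*z, noise_var) and prior z ~ N(0, 1), the ELBO-optimal posterior mean has a closed form, and when the decoder weight w is small relative to the noise, the prior term wins and the posterior mean collapses toward zero. The function name and parameters here are illustrative.

```python
def optimal_posterior_mean(x, w, noise_var):
    """ELBO-optimal posterior mean for a 1-D linear VAE (illustrative sketch).

    The per-example ELBO terms involving the posterior mean mu are
        -(x - w*mu)^2 / (2*noise_var)   (reconstruction likelihood)
        -mu^2 / 2                       (KL regularization toward the N(0,1) prior)
    Maximizing over mu gives mu* = w*x / (w^2 + noise_var).
    """
    return w * x / (w**2 + noise_var)

x = 2.0
# Strong decoder: likelihood dominates, mu* tracks x/w and stays informative.
strong = optimal_posterior_mean(x, w=5.0, noise_var=1.0)
# Weak decoder: prior regularization dominates, mu* shrinks toward 0 (collapse).
weak = optimal_posterior_mean(x, w=0.05, noise_var=1.0)
```

As w shrinks (or the observation noise grows), mu* is pulled to the prior mean regardless of x, which is the collapse mechanism the abstract describes.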






Improving Variational Autoencoders with Density Gap-based Regularization

Neural Information Processing Systems

On that basis, we hypothesize that these two problems stem from the conflict between the KL regularization in ELBO and the function definition of the prior distribution. As such, we propose a novel regularization to substitute the KL regularization in ELBO for VAEs, which is based on the density gap between the aggregated posterior distribution and the prior distribution.
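A hedged sketch of the idea of a density-gap penalty (this is an illustration, not the paper's exact loss): form the aggregated posterior as a mixture of the per-example Gaussian posteriors, then measure how far its log-density is from the prior's log-density at a set of evaluation points. All names here are assumptions.

```python
import numpy as np

def log_normal_pdf(z, mean, var):
    # Log-density of N(mean, var) evaluated at z.
    return -0.5 * (np.log(2 * np.pi * var) + (z - mean) ** 2 / var)

def density_gap(post_means, post_vars, z_samples):
    """Mean absolute gap |log q_agg(z) - log p(z)| over evaluation points.

    Aggregated posterior: q_agg(z) = (1/N) * sum_i N(z; mu_i, var_i).
    Prior: p(z) = N(0, 1).
    """
    comp = log_normal_pdf(z_samples[:, None], post_means[None, :], post_vars[None, :])
    log_q = np.log(np.mean(np.exp(comp), axis=1))
    log_p = log_normal_pdf(z_samples, 0.0, 1.0)
    return np.mean(np.abs(log_q - log_p))
```

When every per-example posterior already matches the prior, the gap is zero; posteriors whose aggregate drifts away from the prior incur a large penalty, which is the mismatch the regularizer targets.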




47951a40efc0d2f7da8ff1ecbfde80f4-Paper.pdf

Neural Information Processing Systems

Modeling the behavior of intelligent agents is an essential subject for autonomous systems. Safe operation of autonomous agents requires accurate prediction of other agents' future motions.


Out-of-Distribution Detection with an Adaptive Likelihood Ratio on Informative Hierarchical VAE

Neural Information Processing Systems

Unsupervised out-of-distribution (OOD) detection is essential for the reliability of machine learning. In the literature, existing work has shown that higher-level semantics captured by hierarchical VAEs can be used to detect OOD instances.
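A minimal sketch of likelihood-ratio OOD scoring in general (the paper's ratio is taken between levels of a hierarchical VAE; here we simplify to two 1-D Gaussian density models, with all names assumed for illustration): score each input by the log-likelihood under a "full" model minus the log-likelihood under a broader "background" model, so that shared low-level statistics cancel and semantic fit dominates.

```python
import numpy as np

def gaussian_loglik(x, mean, var):
    # Log-likelihood of x under N(mean, var).
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def ood_score(x, full_params, background_params):
    """Likelihood-ratio score: higher means more in-distribution.

    full_params / background_params are (mean, var) tuples for the two
    density models; the background model is deliberately less specific.
    """
    return gaussian_loglik(x, *full_params) - gaussian_loglik(x, *background_params)

# In-distribution model N(0, 1) vs. broad background model N(0, 10):
in_dist = ood_score(0.0, (0.0, 1.0), (0.0, 10.0))   # positive: fits the full model
far_ood = ood_score(8.0, (0.0, 1.0), (0.0, 10.0))   # negative: background fits better
```

Thresholding this score separates in-distribution points from outliers that only the broad background model explains, which is the intuition behind likelihood-ratio OOD tests.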