Variational Bayes on Monte Carlo Steroids

Neural Information Processing Systems

Variational approaches are often used to approximate intractable posteriors or normalization constants in hierarchical latent variable models. While often effective in practice, it is known that the approximation error can be arbitrarily large. We propose a new class of bounds on the marginal log-likelihood of directed latent variable models. Our approach relies on random projections to simplify the posterior. In contrast to standard variational methods, our bounds are guaranteed to be tight with high probability. We provide a new approach for learning latent variable models based on optimizing our new bounds on the log-likelihood. We demonstrate empirical improvements on benchmark datasets in vision and language for sigmoid belief networks, where a neural network is used to approximate the posterior.
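The starting point the abstract describes can be made concrete with a standard variational lower bound. The sketch below is not the paper's random-projection bound; it is a minimal Monte Carlo ELBO for a toy single-layer sigmoid belief network with an amortized Bernoulli posterior, with all weights and sizes chosen arbitrarily for illustration. For a tiny latent space the exact marginal likelihood can be computed by enumeration, which shows the ELBO sitting below it (the gap the paper's bounds aim to control).

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Toy sigmoid belief network (illustrative sizes): K binary latents, D binary pixels.
K, D = 3, 5
W = rng.normal(size=(D, K)); b = rng.normal(size=D)   # generative weights
V = rng.normal(size=(K, D)); c = rng.normal(size=K)   # inference-network weights
x = rng.integers(0, 2, size=D).astype(float)          # one observed datum

def log_bern(y, p):
    # log-probability of binary vector y under independent Bernoulli(p)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def log_joint(z):
    # log p(z) + log p(x | z), with a uniform Bernoulli(0.5) prior on z
    return K * np.log(0.5) + log_bern(x, sigmoid(W @ z + b))

# Amortized posterior q(z | x) = Bernoulli(sigmoid(V x + c))
q = sigmoid(V @ x + c)

# Monte Carlo ELBO: E_q[log p(x, z) - log q(z | x)] <= log p(x)
S = 5000
zs = (rng.random((S, K)) < q).astype(float)
elbo = np.mean([log_joint(z) - log_bern(z, q) for z in zs])

# Exact log p(x) by summing over all 2^K latent states (feasible only for tiny K)
states = np.array([[(i >> k) & 1 for k in range(K)] for i in range(2 ** K)], float)
log_px = np.log(sum(np.exp(log_joint(z)) for z in states))

print(elbo, log_px)  # the ELBO lower-bounds the exact log marginal likelihood
```

The gap between the two printed numbers is the KL divergence from q to the true posterior; the abstract's point is that for standard variational bounds nothing limits how large that gap can be, whereas the proposed random-projection bounds are tight with high probability.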



Bayesian Belief Polarization

Neural Information Processing Systems

Situations in which people with opposing prior beliefs observe the same evidence and then strengthen those existing beliefs are frequently offered as evidence of human irrationality. This phenomenon, termed belief polarization, is typically assumed to be non-normative. We demonstrate, however, that a variety of cases of belief polarization are consistent with a Bayesian approach to belief revision. Simulation results indicate that belief polarization is not only possible but relatively common within the class of Bayesian models that we consider.
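One way fully Bayesian agents can polarize is when the hypothesis H and the data D are linked through an intermediate variable V, so that agents with different beliefs about V assign different likelihoods to the same observation. The sketch below is a hedged toy example, not one of the paper's simulated models: V is whether an information source is biased against H, D is the source reporting "not H", and all probabilities are invented for illustration.

```python
# Illustrative priors for two agents (all numbers are assumptions):
H_PRIOR = {"A": 0.7, "B": 0.3}       # prior P(H) for agents A and B
BIASED_PRIOR = {"A": 0.9, "B": 0.1}  # prior P(V): source is biased against H

# Likelihood of the shared observation D = "source reports not-H",
# indexed by (H true?, source biased?):
P_D = {
    (True, True): 0.95, (True, False): 0.05,
    (False, True): 0.60, (False, False): 0.90,
}

def posterior(agent):
    """Posterior P(H | D) after marginalizing out the intermediate variable V."""
    ph, pb = H_PRIOR[agent], BIASED_PRIOR[agent]
    like_h = pb * P_D[(True, True)] + (1 - pb) * P_D[(True, False)]
    like_not_h = pb * P_D[(False, True)] + (1 - pb) * P_D[(False, False)]
    return ph * like_h / (ph * like_h + (1 - ph) * like_not_h)

post_a, post_b = posterior("A"), posterior("B")
print(post_a, post_b)  # A's belief in H rises above 0.7; B's falls below 0.3
```

Both agents apply Bayes' rule to the identical observation, yet each strengthens the belief it started with, because agent A largely attributes the unfavorable report to source bias while agent B takes it at face value.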


BELIEF MAINTENANCE: AN INTEGRATED APPROACH TO UNCERTAINTY MANAGEMENT

AAAI Conferences

Much of the work of problem solving or inference lies in structuring exploration of the system's world to reduce uncertainty. Two general approaches to uncertainty management have become popular. These approaches--symbolic truth maintenance and numeric belief propagation--have been portrayed as rivals in a sometimes acrimonious debate.