Variational Inference via $\chi$ Upper Bound Minimization

Adji Bousso Dieng, Dustin Tran, Rajesh Ranganath, John Paisley, David Blei

Neural Information Processing Systems 

Variational inference (VI) is widely used as an efficient alternative to Markov chain Monte Carlo. It posits a family of approximating distributions $q$ and finds the closest member to the exact posterior $p$. Closeness is usually measured via a divergence $D(q\,\|\,p)$ from $q$ to $p$. While successful, this approach also has problems. Notably, it typically leads to underestimation of the posterior variance.
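
A standard textbook illustration of this underestimation (not part of the abstract itself) takes the exact posterior to be a correlated Gaussian, $p(z) = \mathcal{N}(z \mid \mu, \Sigma)$ with precision matrix $\Lambda = \Sigma^{-1}$, and the approximating family to be fully factorized Gaussians, $q(z) = \prod_i \mathcal{N}(z_i \mid m_i, s_i^2)$. Minimizing the reverse Kullback-Leibler divergence, as in standard VI, yields
\[
\operatorname*{arg\,min}_{\{m_i,\, s_i^2\}} \ \mathrm{KL}(q \,\|\, p)
\;\;\Longrightarrow\;\;
m_i = \mu_i, \qquad s_i^2 = \frac{1}{\Lambda_{ii}} \;\le\; \Sigma_{ii},
\]
since $1/\Lambda_{ii}$ is the conditional variance of $z_i$ given the remaining coordinates, which never exceeds the marginal variance $\Sigma_{ii}$. The optimal $q$ therefore matches the posterior mean but reports variances that are too small whenever the posterior is correlated.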