The Algorithms Behind Probabilistic Programming
Moreover, these algorithms are robust, so they don't require problem-specific hand-tuning. One powerful example is sampling from an arbitrary probability distribution, something we need to do often (and efficiently!) during inference. The brute-force approach, rejection sampling, is problematic because acceptance rates are low: only a tiny fraction of attempts generate successful samples, so the algorithm is slow and inefficient. See this post by Jeremy Kun for further details. Until recently, the main alternative to this naive approach was Markov chain Monte Carlo sampling (of which Metropolis-Hastings and Gibbs sampling are well-known examples). If you used Bayesian inference in the 90s or early 2000s, you may remember BUGS (and WinBUGS) or JAGS, which used these methods. These remain popular teaching tools (see e.g.
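To make the acceptance-rate problem concrete, here is a minimal sketch of rejection sampling. The target and proposal distributions, the bound `M`, and all function names are illustrative choices, not anything from a particular library: we draw from a uniform proposal and accept each draw with probability proportional to the target density, so even in this easy one-dimensional case most proposals are wasted.

```python
import random
import math

random.seed(42)  # fixed seed so the run is reproducible

def rejection_sample(target_pdf, propose, proposal_pdf, m, n_samples):
    """Draw n_samples from target_pdf via rejection sampling.

    m must satisfy target_pdf(x) <= m * proposal_pdf(x) for all x.
    Returns the samples and the total number of proposals attempted.
    """
    samples, attempts = [], 0
    while len(samples) < n_samples:
        x = propose()
        attempts += 1
        # Accept x with probability target(x) / (m * proposal(x))
        if random.random() < target_pdf(x) / (m * proposal_pdf(x)):
            samples.append(x)
    return samples, attempts

# Illustrative target: a standard normal (restricted to [-4, 4]);
# proposal: uniform on [-4, 4], whose density is 1/8 on that interval.
def target(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

M = 8 * target(0)  # ensures M * (1/8) >= target(x) everywhere

samples, attempts = rejection_sample(
    target, lambda: random.uniform(-4, 4), lambda x: 1 / 8, M, 10_000
)
print(f"acceptance rate: {len(samples) / attempts:.2f}")
```

Even here the acceptance rate is only about a third, and it collapses rapidly as the dimension of the problem grows, which is why rejection sampling is rarely usable for real inference.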
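For comparison, a minimal random-walk Metropolis-Hastings sampler (the target density, step size, and burn-in length below are illustrative assumptions, not from any specific package). The key practical point is that MCMC only ever needs density *ratios*, so the normalizing constant of the posterior never has to be computed:

```python
import random
import math

random.seed(0)  # fixed seed so the run is reproducible

def metropolis_hastings(log_target, x0, step, n_steps):
    """Random-walk Metropolis: the Gaussian proposal is symmetric, so the
    Hastings correction cancels and we accept with min(1, p(x')/p(x))."""
    x, chain = x0, []
    for _ in range(n_steps):
        proposal = x + random.gauss(0, step)
        # Accept/reject in log space for numerical stability.
        if math.log(random.random()) < log_target(proposal) - log_target(x):
            x = proposal
        chain.append(x)
    return chain

# Unnormalized log-density of a standard normal; MH never needs the
# normalizing constant, which is exactly what makes it useful for inference.
log_target = lambda x: -x * x / 2

chain = metropolis_hastings(log_target, x0=0.0, step=1.0, n_steps=50_000)
burned = chain[5_000:]  # discard burn-in before summarizing
print(sum(burned) / len(burned))  # sample mean, close to the true mean of 0
```

Unlike rejection sampling, every step contributes to the chain; the price is that successive samples are correlated, and diagnosing convergence is where the hand-tuning that modern algorithms try to eliminate traditionally came in.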