Editor's note: The following is an interview with Columbia University Professor Andrew Gelman conducted by marketing scientist Kevin Gray, in which Gelman spells out the ABCs of Bayesian statistics.

Andrew Gelman: Bayesian statistics uses the mathematical rules of probability to combine data with "prior information," yielding inferences that (if the model being used is correct) are more precise than would be obtained from either source of information alone. Classical statistical methods avoid prior distributions: in a classical analysis you might include a predictor in your model (for example), exclude it, or pool it into some larger set of predictors to get a more stable estimate. Those are pretty much your only choices.
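As a concrete illustration of how a prior and data combine, here is a minimal sketch in plain Python of the conjugate Beta-Binomial update, the simplest case where the Bayesian calculation has a closed form. The numbers are hypothetical, chosen only for illustration:

```python
# Conjugate Beta-Binomial update: combining prior information with data.
# (Illustrative sketch; the prior and data values below are made up.)

def beta_binomial_update(alpha, beta, successes, failures):
    """Posterior Beta parameters after observing binomial data."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Prior belief: Beta(4, 4), centered on 0.5 (worth ~8 "effective" observations).
prior = (4, 4)

# Data: 9 successes in 10 trials (observed proportion 0.9).
posterior = beta_binomial_update(*prior, successes=9, failures=1)

print(beta_mean(*prior))      # prior mean: 0.5
print(beta_mean(*posterior))  # posterior mean: 13/18, about 0.72
```

The posterior mean (13/18) sits between the prior mean (0.5) and the observed proportion (0.9), and it is effectively based on 18 observations rather than 8 or 10, which is the sense in which combining prior information with data gives a more precise inference than either source alone.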

I did a webcast earlier today about Bayesian statistics. Sometime in the next week, the video should be available from O'Reilly. In the meantime, you can see my slides here: And here's a transcript of what I said:

Thanks, everyone, for joining me for this webcast. At the bottom of this slide you can see the URL for my slides, so you can follow along at home. I'm Allen Downey, and I'm a professor at Olin College, a new engineering college just outside Boston. Our mission is to fix engineering education, and one of the ways I'm working on that is by teaching Bayesian statistics. Bayesian methods have been the victim of a 200-year smear campaign. If you are interested in the history and the people involved, I recommend this book, The Theory That Would Not Die.

This text provides R tutorials on statistics, including hypothesis testing, ANOVA, and linear regression. It answers frequent requests from r-tutor.com users for exercise solutions and offline access. Part III of the text is about Bayesian statistics: it begins with closed-form analytic solutions and basic BUGS models for simple examples, then moves on to OpenBUGS for Bayesian ANOVA and regression analysis.

Speaker: Christopher Fonnesbeck

Bayesian statistics offers robust and flexible methods for data analysis that, because they are based on probability models, have the added benefit of being readily interpretable by non-statisticians. Until recently, however, implementing Bayesian models was prohibitively complex for most analysts. The advent of probabilistic programming has abstracted away much of that complexity, making such methods more broadly available. PyMC3 is an open-source Python module for probabilistic programming that implements several modern, computationally intensive statistical algorithms for fitting Bayesian models, including Hamiltonian Monte Carlo (HMC) and variational inference. PyMC3's intuitive syntax is helpful for new users, and its reliance on Theano for much of the computational work has allowed the developers to keep the code base simple, making it easy to extend the software to meet analytic needs.
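To make "probabilistic programming abstracts the complexity" concrete, here is a deliberately minimal random-walk Metropolis sampler, in plain Python, for a coin-flip posterior. This is a sketch of the kind of machinery a library like PyMC3 automates; it is not PyMC3's API, and PyMC3's actual samplers (such as HMC) are far more sophisticated. All names and numbers are my own illustration:

```python
import math
import random

def log_posterior(p, heads, n):
    """Unnormalized log posterior: uniform prior on p, binomial likelihood."""
    if not 0 < p < 1:
        return float("-inf")
    return heads * math.log(p) + (n - heads) * math.log(1 - p)

def metropolis(heads, n, steps=20000, scale=0.1, seed=1):
    """Random-walk Metropolis: propose, then accept/reject by posterior ratio."""
    random.seed(seed)
    p = 0.5
    lp = log_posterior(p, heads, n)
    samples = []
    for _ in range(steps):
        proposal = p + random.gauss(0, scale)
        lp_proposal = log_posterior(proposal, heads, n)
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < lp_proposal - lp:
            p, lp = proposal, lp_proposal
        samples.append(p)
    return samples[steps // 2:]  # discard the first half as burn-in

draws = metropolis(heads=7, n=10)
print(sum(draws) / len(draws))  # close to the exact posterior mean 8/12
```

With a uniform prior and 7 heads in 10 flips, the exact posterior is Beta(8, 4) with mean 8/12, so the sampled mean should land near 0.667. Writing even this toy sampler by hand (and tuning its proposal scale and burn-in) is exactly the burden that probabilistic programming removes.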

We develop a closed-form asymptotic formula for the marginal likelihood of data given a naive Bayesian network model with two hidden states and binary features. This formula deviates from the standard BIC score. Our work thus provides a concrete example that the BIC score is, in general, not valid for statistical models belonging to a stratified exponential family. This stands in contrast to linear and curved exponential families, for which the BIC score has been proven to approximate the marginal likelihood correctly.
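For reference, the standard BIC approximation that the abstract says breaks down can be sketched as follows, with $n$ the sample size, $d$ the model dimension, and $\hat\theta$ the maximum-likelihood estimate:

```latex
% Standard BIC (Laplace) approximation to the log marginal likelihood,
% valid for regular models such as linear and curved exponential families:
\log p(D \mid M) \;=\; \log p\bigl(D \mid \hat\theta, M\bigr) \;-\; \frac{d}{2}\log n \;+\; O(1)
```

The abstract's point is that for models in a stratified exponential family, such as this naive Bayesian network, the coefficient of $\log n$ in the correct asymptotic expansion generally differs from $d/2$, so the formula above cannot be applied as-is.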