Bayesian Inference: Instructional Materials


Modelling trait dependent speciation with Approximate Bayesian Computation

arXiv.org Machine Learning

Phylogenetics is the field of modelling the discrete temporal dynamics of speciation. Complex models can nowadays be studied using the Approximate Bayesian Computation (ABC) approach, which avoids likelihood calculations. The field's progress is hampered by the lack of robust software for estimating the numerous parameters of the speciation process. In this work we present an R package, pcmabc, based on Approximate Bayesian Computation, that implements three novel phylogenetic algorithms for trait-dependent speciation modelling. Our phylogenetic comparative methodology takes into account both the simulated traits and the phylogeny, attempting to estimate the parameters of the processes generating the phenotype and the tree. The user is not restricted to a predefined set of models and can specify a variety of evolutionary and branching models. We illustrate the software with a simulation-reestimation study focused on the branching Ornstein-Uhlenbeck process, where the branching rate depends non-linearly on the value of the driving Ornstein-Uhlenbeck process. The work also includes a tutorial on how to use the software.
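The core idea behind ABC is easy to sketch: draw parameters from the prior, simulate data forward, and keep only the draws whose simulations resemble the observations. Below is a minimal rejection-ABC loop in R. It is a generic illustration rather than pcmabc's actual API; the toy simulator, summary statistic, and tolerance are all assumptions made for the example.

```r
# Generic rejection ABC in R (illustrative; not the pcmabc API).
set.seed(7)
obs_summary <- 40                       # e.g. an observed number of tips
simulate_summary <- function(lambda) {  # hypothetical stand-in for a forward
  rpois(1, 10 * lambda)                 # simulator of the branching process
}
n_draws <- 1e5
tol <- 2                                # acceptance tolerance (assumed)
lambda <- runif(n_draws, 0, 10)         # draws from a uniform prior
sims <- vapply(lambda, simulate_summary, numeric(1))
accepted <- lambda[abs(sims - obs_summary) <= tol]   # keep close matches
c(post_mean = mean(accepted), acc_rate = length(accepted) / n_draws)
```

The accepted draws approximate the posterior; shrinking the tolerance trades acceptance rate for accuracy.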


A tutorial on MDL hypothesis testing for graph analysis

arXiv.org Machine Learning

When analysing graph structure, it can be difficult to determine whether patterns found are due to chance, or due to structural aspects of the process that generated the data. Hypothesis tests are often used to support such analyses. These allow us to make statistical inferences about which null models are responsible for the data, and they can be used as a heuristic in searching for meaningful patterns. The minimum description length (MDL) principle [6, 4] allows us to build such hypothesis tests, based on efficient descriptions of the data. Broadly: we translate the regularity we are interested in into a code for the data, and if this code describes the data more efficiently than a code corresponding to the null model, by a sufficient margin, we may reject the null model. This is a frequentist approach to MDL, based on hypothesis testing. Bayesian approaches to MDL for model selection rather than model rejection are more common, but for the purposes of pattern analysis, a hypothesis testing approach provides a more natural fit with existing literature. We provide a brief illustration of this principle based on the running example of analysing the size of the largest clique in a graph. We illustrate how a code can be constructed to efficiently represent graphs with large cliques, and how the description length of the data under this code can be compared to the description length under a code corresponding to a null model to show that the null model is highly unlikely to have generated the data.
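To make the description-length arithmetic concrete, here is a hedged R sketch of such a test for the clique example. The setup is an assumption for illustration (known edge probability, a planted clique, and these particular codes), not the paper's code: a Bernoulli code under the Erdős–Rényi null is compared against a two-part code that first names the clique vertices and then encodes the remaining edges under the null. By the no-hypercompression inequality, a saving of at least -log2(alpha) bits justifies rejecting the null at level alpha.

```r
# MDL hypothesis test sketch for a planted clique (illustrative assumptions).
null_bits <- function(n_slots, n_edges, p) {
  # Bernoulli code under G(n, p): -log2(p) per edge, -log2(1 - p) per non-edge
  -(n_edges * log2(p) + (n_slots - n_edges) * log2(1 - p))
}
clique_bits <- function(n, k, n_edges, p) {
  # Two-part code: name the k clique vertices, then code the rest as the null does
  lchoose(n, k) / log(2) +
    null_bits(choose(n, 2) - choose(k, 2), n_edges - choose(k, 2), p)
}
n <- 100; p <- 0.5; k <- 20
# A graph with a planted k-clique: all clique edges, plus about p of the rest
n_edges <- choose(k, 2) + round(p * (choose(n, 2) - choose(k, 2)))
gain <- null_bits(choose(n, 2), n_edges, p) - clique_bits(n, k, n_edges, p)
alpha <- 0.05
cat("bits saved:", round(gain), "; reject null:", gain >= -log2(alpha), "\n")
```

Here the clique-aware code saves roughly a hundred bits, far more than the -log2(0.05) (about 4.3) bits needed to reject at the 5% level.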


A Gentle Introduction to Maximum Likelihood Estimation

@machinelearnbot

The first time I heard someone use the term maximum likelihood estimation, I went to Google and found out what it meant. Then I went to Wikipedia to find out what it really meant. To spare you the wrestling required to understand and incorporate MLE into your data science workflow, ethos, and projects, I've compiled this guide. This is funny (if you follow this strange domain of humor), and mostly right about the differences between the two camps. Not minding that our Sun going into nova is not really a repeatable experiment -- sorry, frequentists!


Bayesian Statistics Coursera

@machinelearnbot

About this course: This course describes Bayesian statistics, in which one's inferences about parameters or hypotheses are updated as evidence accumulates. You will learn to use Bayes' rule to transform prior probabilities into posterior probabilities, and be introduced to the underlying theory and perspective of the Bayesian paradigm. The course will apply Bayesian methods to several practical problems, to show end-to-end Bayesian analyses that move from framing the question, to building models, to eliciting prior probabilities, to computing the final posterior distribution in R (free statistical software). Additionally, the course will introduce credible regions, Bayesian comparisons of means and proportions, Bayesian regression and inference using multiple models, and Bayesian prediction. We assume learners in this course have background knowledge equivalent to what is covered in the earlier three courses in this specialization: "Introduction to Probability and Data," "Inferential Statistics," and "Linear Regression and Modeling."
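As a taste of the course's central mechanic, the following R sketch (my own illustration, not course material) applies Bayes' rule one observation at a time to two competing hypotheses about a coin, showing how the posterior after each flip becomes the prior for the next.

```r
# Sequential Bayesian updating over two hypotheses (illustrative sketch).
prior   <- c(fair = 0.5, biased = 0.5)   # prior probabilities
p_heads <- c(fair = 0.5, biased = 0.8)   # P(heads) under each hypothesis
flips   <- c(1, 1, 0, 1, 1)              # observed data (1 = heads)
post <- prior
for (x in flips) {
  likelihood <- if (x == 1) p_heads else 1 - p_heads
  post <- post * likelihood              # Bayes' rule numerator
  post <- post / sum(post)               # normalize; posterior is next prior
}
round(post, 3)                           # evidence favours the biased coin
```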


Madrid Advanced Statistics and Data Mining Summer School

@machinelearnbot

The Madrid ASDM summer school is in its thirteenth edition this year, with hundreds of students from all over the world having attended so far. It comprises 12 intensive, week-long courses (15 lecture hours each), and a student may attend from one up to six courses. The courses cover topics such as Neural Networks and Deep Learning, Bayesian Networks, Big Data with Apache Spark, Bayesian Inference, Text Mining, and Time Series. Each course has theoretical and practical classes, the latter done in R or Python. While the summer school is mainly attended by people from academia (PhD students and researchers), people from industry also attend.


Bayesian Computational Analyses with R Udemy

@machinelearnbot

Bayesian Computational Analyses with R is an introductory course on the use and implementation of Bayesian modeling using R software. The Bayesian approach is an alternative to the "frequentist" approach, in which one simply takes a sample of data and makes inferences about the likely parameters of the population. In contrast, the Bayesian approach combines the likelihood of the observed data with prior beliefs about the parameters (the 'prior') to estimate the most likely values and distributions of the population parameters (the 'posterior'). The course is useful to anyone who wishes to learn about Bayesian concepts and is suited to novice and intermediate students and practitioners alike. It is a practical, "hands-on" course with many examples using R scripts and software, and it is also conceptual, explaining the underlying Bayesian ideas. All materials, software, R scripts, slides, exercises and solutions are included with the course materials. It is helpful to have some grounding in basic inferential statistics and probability theory. No experience with R is necessary, although it is also helpful.
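The frequentist/Bayesian contrast drawn above fits in a few lines of R (an illustration under assumed data and an assumed prior, not course code): the frequentist answer is a single point estimate, while the Bayesian answer is an entire posterior distribution.

```r
# Point estimate vs. posterior distribution for a coin (illustrative).
y <- c(1, 1, 0, 1, 0, 1, 1, 1, 0, 1)      # 7 heads in 10 flips (assumed data)
mle <- mean(y)                            # frequentist: maximum likelihood
a <- 2; b <- 2                            # Beta(2, 2) prior (an assumption)
post_a <- a + sum(y)                      # conjugate beta-binomial update
post_b <- b + length(y) - sum(y)
post_mean <- post_a / (post_a + post_b)
cat("MLE:", mle, "; posterior mean:", round(post_mean, 3), "\n")
curve(dbeta(x, post_a, post_b), 0, 1,     # the full posterior, not one number
      xlab = "theta", ylab = "density")
```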


Bayesian Statistics: Techniques and Models Coursera

@machinelearnbot

About this course: This is the second of a two-course sequence introducing the fundamentals of Bayesian statistics. It builds on the course Bayesian Statistics: From Concept to Data Analysis, which introduces Bayesian methods through use of simple conjugate models. Real-world data often require more sophisticated models to reach realistic conclusions. This course aims to expand our "Bayesian toolbox" with more general models, and computational techniques to fit them. In particular, we will introduce Markov chain Monte Carlo (MCMC) methods, which allow sampling from posterior distributions that have no analytical solution.
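As a preview of the MCMC material, here is a bare-bones random-walk Metropolis sampler in R (a generic illustration on a toy model of my choosing, not the course's code). Its appeal is that it needs the posterior only up to a normalizing constant.

```r
# Random-walk Metropolis sketch: normal likelihood, normal prior (illustrative).
y <- c(1.2, 0.8, 1.5)                        # toy data (an assumption)
log_post <- function(theta) {
  dnorm(theta, 0, 1, log = TRUE) +           # log prior
    sum(dnorm(y, theta, 1, log = TRUE))      # log likelihood
}
n_iter <- 5000; chain <- numeric(n_iter); theta <- 0
for (i in 1:n_iter) {
  prop <- theta + rnorm(1, 0, 0.5)           # symmetric random-walk proposal
  # accept with probability min(1, posterior ratio)
  if (log(runif(1)) < log_post(prop) - log_post(theta)) theta <- prop
  chain[i] <- theta
}
mean(chain[-(1:1000)])                       # posterior mean after burn-in
```

This toy posterior is actually conjugate, so the result can be checked analytically; the same loop applies unchanged when no closed form exists.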


Getting Started with Particle Metropolis-Hastings for Inference in Nonlinear Dynamical Models

arXiv.org Machine Learning

This tutorial provides a gentle introduction to the particle Metropolis-Hastings (PMH) algorithm for parameter inference in nonlinear state-space models, together with a software implementation in the statistical programming language R. We employ a step-by-step approach to develop an implementation of the PMH algorithm (and the particle filter within) together with the reader. The final implementation is also available as the pmhtutorial package on CRAN. Throughout the tutorial, we provide some intuition as to how the algorithm operates and discuss some solutions to problems that might occur in practice. To illustrate the use of PMH, we consider parameter inference in a linear Gaussian state-space model with synthetic data and a nonlinear stochastic volatility model with real-world data.
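The heart of PMH is that a particle filter supplies an unbiased estimate of the likelihood, which then stands in for the exact likelihood inside the Metropolis-Hastings acceptance ratio. Below is a hedged R sketch of the bootstrap particle filter's log-likelihood estimate for a toy linear Gaussian state-space model; the model and parameter values are my own choices, and the pmhtutorial package contains the authors' full implementation.

```r
# Bootstrap particle filter log-likelihood estimate (illustrative sketch).
set.seed(1)
phi <- 0.75; sv <- 1.0; se <- 0.5; Tn <- 100   # toy AR(1)-plus-noise model
x <- numeric(Tn)
for (t in 2:Tn) x[t] <- phi * x[t - 1] + rnorm(1, 0, sv)
yobs <- x + rnorm(Tn, 0, se)

pf_loglik <- function(phi, N = 500) {
  p <- rnorm(N, 0, sv / sqrt(1 - phi^2))       # stationary initialization
  ll <- 0
  for (t in 1:Tn) {
    p <- phi * p + rnorm(N, 0, sv)             # propagate particles
    w <- dnorm(yobs[t], p, se)                 # weight by observation density
    ll <- ll + log(mean(w))                    # accumulate likelihood estimate
    p <- sample(p, N, replace = TRUE, prob = w)  # multinomial resampling
  }
  ll
}
pf_loglik(0.75)   # in PMH this estimate replaces the exact likelihood
```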


Variational Bayesian inference for linear and logistic regression

arXiv.org Machine Learning

The article describes the model, derivation, and implementation of variational Bayesian inference for linear and logistic regression, both with and without automatic relevance determination. It serves a dual function: it acts as a tutorial on deriving variational Bayesian inference for simple models, and it documents, with brief examples, the MATLAB functions that implement this inference. These functions are freely available online.
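Since the article's reference implementations are in MATLAB, the following compact R sketch shows the standard mean-field scheme for Bayesian linear regression with an unknown prior precision (a textbook-style derivation under assumed hyperparameters, not the article's code): the Gaussian factor q(w) and the Gamma factor q(alpha) are updated in turn until convergence.

```r
# Mean-field variational Bayes for linear regression (illustrative sketch;
# the noise precision beta is assumed known to keep the example short).
set.seed(42)
n <- 50; d <- 3; beta <- 25
X <- matrix(rnorm(n * d), n, d)
w_true <- c(1.5, -0.5, 0)
y <- X %*% w_true + rnorm(n, 0, 1 / sqrt(beta))

a0 <- b0 <- 1e-2                 # Gamma(a0, b0) hyperprior on alpha (assumed)
E_alpha <- a0 / b0
for (iter in 1:50) {
  # q(w) = N(m, S), given the current expectation of alpha
  S <- solve(E_alpha * diag(d) + beta * crossprod(X))
  m <- beta * S %*% crossprod(X, y)
  # q(alpha) = Gamma(a, b), given the current moments of w
  a <- a0 + d / 2
  b <- b0 + 0.5 * (sum(m^2) + sum(diag(S)))
  E_alpha <- a / b
}
round(cbind(true = w_true, vb_mean = as.vector(m)), 2)
```

With automatic relevance determination, one such Gamma factor is kept per coefficient instead of a single shared alpha.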


40 Python Statistics For Data Science Resources

#artificialintelligence

For an introduction to statistics, this tutorial with real-life examples is the way to go. The notebooks of this tutorial will introduce you to concepts like mean, median, standard deviation, and the basics of topics such as hypothesis testing and probability distributions. A fine way to start your stats learning, since it is inspired by the books "Think Bayes" and "Think Stats", which are two top recommendations that will come back below! If you're looking for books, you can try out this free book on computational statistics in Python, which not only contains an introduction to programming with Python, but also treats topics such as Markov Chain Monte Carlo, the Expectation-Maximization (EM) algorithm, resampling methods, and much more. Or you can buy this book by Thomas Haslwanter for a general introduction to common statistical tests, linear regression analysis, and topics from survival analysis and Bayesian statistics. Note that this book takes the life and medical sciences as its application area. As you can see, both of the above books also introduce you to more advanced statistics topics with Python. If you're a fan of videos, you should consider watching this tutorial on statistical data analysis with SciPy with Christopher Fonnesbeck, an Assistant Professor in the Department of Biostatistics at the Vanderbilt University School of Medicine.