Bayesian Inference


Bayesian Meta-Learning Is All You Need

#artificialintelligence

Update: This post is part of a blog series on Meta-Learning that I'm working on. Check out part 1 and part 2. In my previous post, "Meta-Learning Is All You Need," I discussed the motivation for the meta-learning paradigm, explained the mathematical underpinnings, and reviewed the three approaches to designing a meta-learning algorithm (namely, black-box, optimization-based, and non-parametric). I also mentioned that, according to Chelsea Finn, there are two views of the meta-learning problem: a deterministic view and a probabilistic view. Note: The content of this post is primarily based on CS330's lecture 5 on Bayesian meta-learning. It is accessible to the public.


What is the Bayesian theorem?

#artificialintelligence

Bayesian networks are graphical representations of probabilistic relationships between a number of variables. They were designed to relax the assumption of independence made by Naïve Bayes and thus allow for dependencies between variables. As a first example, suppose I want to assess whether God exists. First, I have to settle on some way to quantify the question, something like: "if God existed, then peace ought to be many times more likely than war."
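
In odds form, Bayes' theorem makes that kind of statement usable directly. As a minimal worked example (the factor of 10 and the even prior odds are illustrative assumptions, not from the article):

\[
\underbrace{\frac{P(G \mid \text{peace})}{P(\neg G \mid \text{peace})}}_{\text{posterior odds}}
= \underbrace{\frac{P(\text{peace} \mid G)}{P(\text{peace} \mid \neg G)}}_{\text{likelihood ratio}}
\times \underbrace{\frac{P(G)}{P(\neg G)}}_{\text{prior odds}}
= 10 \times 1 = 10,
\]

so the posterior probability would be \(P(G \mid \text{peace}) = 10/11 \approx 0.91\).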


Top 8 Open Source Tools For Bayesian Networks

#artificialintelligence

Bayesian Network, also known as Bayes network, is a probabilistic directed acyclic graphical model which can be used for time series prediction, anomaly detection, diagnostics and more. In machine learning, Bayesian inference is known for its robust set of tools for modelling any random variable, including business performance indicators, the value of a regression parameter, and others. It is also regarded as one of the best approaches to modelling uncertainty. In this article, we list down the top eight open-source tools for Bayesian Networks. Bayesian inference Using Gibbs Sampling (BUGS) is a software package for the Bayesian analysis of statistical models utilising Markov chain Monte Carlo techniques.
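
To make concrete what these packages automate at scale, here is a minimal sketch of exact inference in a toy two-node network; the structure (Rain → WetGrass) and the probabilities are illustrative assumptions:

```python
# Toy Bayesian network: Rain -> WetGrass (illustrative numbers).
# Query P(Rain=True | WetGrass=True) by enumeration.

p_rain = 0.2
p_wet_given = {True: 0.9, False: 0.1}  # P(WetGrass=True | Rain)

# Joint probability of each Rain value with WetGrass=True.
joint = {r: (p_rain if r else 1 - p_rain) * p_wet_given[r] for r in (True, False)}
evidence = sum(joint.values())                  # P(WetGrass=True)
posterior = joint[True] / evidence              # P(Rain=True | WetGrass=True)
print(f"P(Rain | WetGrass) = {posterior:.3f}")  # ~0.692
```

Real tools do exactly this kind of conditioning, but over large graphs where enumeration is infeasible and sampling methods such as Gibbs take over.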


Why you should try the Bayesian approach of A/B testing

#artificialintelligence

"Critical thinking is an active and ongoing process. It requires that we all think like Bayesians, updating our knowledge as new information comes in." ― Daniel J. Levitin, A Field Guide to Lies: Critical Thinking in the Information Age Before we delve into the intuition behind using the Bayesian approach of estimation, we need to understand a few concepts. Inferential statistics is when you infer something about a whole population based on a sample of that population, as opposed to descriptive statistics which describes something about the whole population. When it comes to inferential statistics, there are two main philosophies: frequentist inference and Bayesian inference. The frequentist approach is known to be the more traditional approach to statistical inference, and thus studied more in most statistics courses (especially introductory courses). However, many would argue that the Bayesian approach is much closer to the way humans naturally perceive probability.


13 Algorithms and 4 Learning Methods of Machine Learning

#artificialintelligence

Algorithms can be classified according to the similarity of their function and form, for example tree-based algorithms, neural-network-based algorithms, and so on. Of course, the scope of machine learning is very large, and some algorithms are difficult to place clearly into a single category. Regression algorithms are a class of algorithms that try to explore the relationship between variables using a measure of error. Regression is a powerful tool for statistical machine learning. In the field of machine learning, when people talk about regression, they sometimes refer to a type of problem and sometimes to a type of algorithm.
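
As a minimal sketch of the regression idea, fitting a line to synthetic data by minimizing squared error (the data-generating numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = 2x + 1 plus noise.
x = rng.uniform(0, 10, size=100)
y = 2 * x + 1 + rng.normal(scale=0.5, size=100)

# Least squares: minimize the squared-error measure over slope and intercept.
X = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")
```

The "measure of error" mentioned above is the sum of squared residuals; other regression algorithms differ mainly in the error measure and the function family they fit.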


Data-Informed Decomposition for Localized Uncertainty Quantification of Dynamical Systems

#artificialintelligence

In such problems, the smallest length-scale of the system's dynamics controls the numerical resolution required to effectively resolve the embedded physics. In practice, however, high numerical resolution is only required in a confined region of the system where fast dynamics or localized material variability are exhibited, whereas a coarser discretization can be sufficient in the remaining majority of the system. To this end, a unified computational scheme with uniform spatio-temporal resolution for uncertainty quantification can be very computationally demanding. Partitioning the complex dynamical system into smaller, easier-to-solve problems based on the localized dynamics and material variability can reduce the overall computational cost. However, identifying the region of interest for high-resolution and intensive uncertainty quantification can be problem-dependent.


Would Bayesian approaches improve COVID-19 forecasts?

#artificialintelligence

Modelling for the pandemic has shown that this debate should still be front and center. The frequentists are mostly in the spotlight advising world leaders. If you listen closely, you will hear a common refrain: "we just need more data." This is, of course, the age-old problem of statistical significance. However, today we aren't in a harmless lab study; these data are only realized through death.


Why Deep Learning Ensembles Outperform Bayesian Neural Networks

#artificialintelligence

Recently I came across an interesting paper, "Deep Ensembles: A Loss Landscape Perspective" by Lakshminarayanan et al. In this article, I will break down the paper, summarise its findings, and delve into some of the techniques and strategies the authors used that will be useful for understanding models and their learning process. It will also go over some possible extensions to the paper. You can also find my annotations on the paper down below. The authors conjectured (correctly) that deep ensembles (ensembles of deep learning models) outperform Bayesian neural networks because "popular scalable variational Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space." In simple words, a Bayesian neural network run from a single initialization will reach one of the peaks and stop.
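
A minimal sketch of the deep-ensembles idea (not the paper's code): train the same architecture from several random initializations and combine the predictions. Synthetic data and scikit-learn stand in for the paper's setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic 1-D regression task.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

# Deep ensemble: same architecture, different random initializations,
# so each member can settle into a different mode of the loss landscape.
members = [
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(5)
]

X_test = np.linspace(-3, 3, 7).reshape(-1, 1)
preds = np.stack([m.predict(X_test) for m in members])

# Ensemble mean is the prediction; spread across members signals uncertainty.
print("mean:", preds.mean(axis=0).round(2))
print("std :", preds.std(axis=0).round(2))
```

The disagreement across members is exactly the "diverse modes in function space" effect: each initialization lands in a different basin, whereas a single-mode variational posterior cannot capture that diversity.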


A Bayesian model for a simulated meta-analysis

#artificialintelligence

There are multiple ways to estimate a Stan model in R, but I choose to build the Stan code directly rather than using the brms or rstanarm packages. In the Stan code, we need to define the data structure, specify the parameters, specify any transformed parameters (which are just a function of the parameters), and then build the model – which includes laying out the prior distributions as well as the likelihood. In this case, the model is slightly different from what was presented in the context of a mixed effects model. The key difference is that there are prior distributions on \(\Delta\) and \(\tau\), introducing an additional level of uncertainty into the estimate. This added measure of uncertainty is a strength of the Bayesian approach.
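
As a sketch of the hierarchical structure being described (the specific prior families below are illustrative assumptions, not necessarily those used in the post):

\[
\begin{aligned}
y_k &\sim \mathcal{N}(\theta_k, \sigma_k^2) && \text{observed effect in study } k,\\
\theta_k &\sim \mathcal{N}(\Delta, \tau^2) && \text{study-level true effects},\\
\Delta &\sim \mathcal{N}(0, 10^2), \qquad \tau \sim \text{Half-Cauchy}(0, 1).
\end{aligned}
\]

The hyperpriors on \(\Delta\) and \(\tau\) in the last line are what introduce the additional level of uncertainty mentioned above, which the mixed-effects formulation treats as fixed quantities to be estimated.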


Bayesian Inference: The Maximum Entropy Principle

#artificialintelligence

In this article, I will explain what the maximum entropy principle is, how to apply it, and why it's useful in the context of Bayesian inference. The code to reproduce the results and figures can be found in this notebook. The maximum entropy principle is a method for creating probability distributions that are most consistent with a given set of assumptions and nothing more. The rest of the article will explain what this means. First, we need a way to measure the uncertainty in a probability distribution.
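
As a minimal sketch of that uncertainty measure, here is Shannon entropy computed for a few illustrative distributions; the uniform one comes out highest, which is what the maximum entropy principle exploits when nothing else is assumed:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum p_i log p_i (in nats); 0*log(0) treated as 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

# Three distributions over four outcomes (illustrative numbers).
distributions = {
    "uniform": [0.25, 0.25, 0.25, 0.25],  # maximal uncertainty
    "skewed":  [0.70, 0.10, 0.10, 0.10],  # partial information
    "certain": [1.00, 0.00, 0.00, 0.00],  # no uncertainty
}

for name, p in distributions.items():
    print(f"{name:8s} H = {entropy(p):.3f}")
```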