Data scientist Stefano Cosentino observed in a post that the Bayesian approach leans more towards the distributions associated with each parameter: he illustrates, for instance, two parameters by the Gaussian curves they take on after a trained Bayesian network has converged. In the Bayesian approach, then, the unknown parameters can themselves be treated as random variables. A University of Buffalo paper defines the Bayesian approach to uncertainty as one that treats all uncertain quantities as random variables and uses the laws of probability to manipulate them. Hence, the paper states, the proper Bayesian approach integrates over all uncertain quantities rather than optimising them.
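The distinction the paper draws, integrating over uncertain quantities rather than optimising them, can be illustrated with a minimal sketch. This is not code from either source; it infers the mean of hypothetical data on a grid, assuming a flat prior and a known noise variance for simplicity, and contrasts the point estimate with the posterior mean and spread:

```python
import numpy as np

# Hypothetical data: 20 noisy observations of an unknown mean (illustrative only)
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=20)

# Treat the unknown mean as a random variable: evaluate it on a grid
theta = np.linspace(-2.0, 6.0, 1000)
dtheta = theta[1] - theta[0]

# Gaussian likelihood with known sigma = 1, flat prior
log_like = np.array([np.sum(-0.5 * (data - t) ** 2) for t in theta])
post = np.exp(log_like - log_like.max())
post /= post.sum() * dtheta          # normalise so the density integrates to 1

# Optimisation gives a single point estimate ...
mle = theta[np.argmax(post)]

# ... integration gives a full distribution: a mean AND an uncertainty
post_mean = np.sum(theta * post) * dtheta
post_sd = np.sqrt(np.sum((theta - post_mean) ** 2 * post) * dtheta)
print(mle, post_mean, post_sd)
```

With a flat prior the point estimate and posterior mean coincide with the sample mean, but only the integrated posterior reports the parameter's spread (here roughly 1/sqrt(20)).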
Complex decision-making scenarios require maintaining a high level of concentration and acquiring knowledge about the context of the task at hand. Focus of attention is affected not only by contextual factors but also by the way operators interact with the information. Conversely, determining optimal ways to interact with this information can augment operators' cognition. However, challenges remain in designing efficient mathematical frameworks and sound metrics to infer, reason about, and assess the level of attention during spatio-temporally complex problem solving in hybrid human-machine systems. This paper proposes a computational framework based on a Bayesian approach (BAN) to infer users' focus of attention from the physical expression generated by embodied interaction, and to further support decision making in an unobtrusive manner. Experiments involving five interaction modalities (vision-based gesture interaction, glove-based gesture interaction, speech, feet, and body balance) were conducted to assess the proposed framework's feasibility, including the likelihood of the attention assessed by the enhanced BAN and task performance. The results confirm that physical expressions have a determining effect on the quality of solutions to spatio-navigational problems.
I have made some progress with my work on combining independent evidence using a Bayesian approach but eschewing standard Bayesian updating. I found a neat analytical way of doing this, to a very good approximation, in cases where each estimate of a parameter corresponds to the ratio of two variables, each determined with normal error, with the fractional uncertainty in the numerator and denominator variables differing between the types of evidence. This situation seems not uncommon in science, and it closely approximates the one that arises when estimating climate sensitivity. A manuscript in which I develop and test this method has been accepted by the Journal of Statistical Planning and Inference (for a special issue on Confidence Distributions edited by Tore Schweder and Nils Hjort). Frequentist coverage is almost exact using my analytical solution, based on combining Jeffreys' priors in quadrature, whereas Bayesian updating produces far poorer probability matching.
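The analytical solution itself is not reproduced in the post, but the setting it addresses, with each line of evidence estimating the same quantity as a ratio of two normally measured variables whose fractional uncertainties differ between evidence types, can be illustrated with a simple Monte Carlo sketch (all numbers are hypothetical, and this is not the author's method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent lines of evidence, each estimating the same ratio S = N / D.
# Numerator and denominator are each measured with normal error; the fractional
# uncertainties differ between the evidence types (all values hypothetical).
evidence = [
    {"num": 3.7, "num_sd": 0.6, "den": 1.2, "den_sd": 0.1},  # evidence type 1
    {"num": 2.5, "num_sd": 0.3, "den": 0.8, "den_sd": 0.2},  # evidence type 2
]

n = 1_000_000
samples = []
for e in evidence:
    num = rng.normal(e["num"], e["num_sd"], n)
    den = rng.normal(e["den"], e["den_sd"], n)
    samples.append(num / den)   # implied distribution of S for this evidence

# Percentiles give an uncertainty range per evidence line; the skewness of a
# ratio of normals is what makes combining such estimates non-trivial.
for s in samples:
    print(np.percentile(s, [5, 50, 95]))
```

Combining these skewed per-evidence distributions into a single estimate with good frequentist coverage is precisely the problem the accepted manuscript tackles analytically.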
Today we are going to implement a Bayesian linear regression in R from scratch and use it to forecast US GDP growth. This post is based on a very informative manual from the Bank of England on Applied Bayesian Econometrics. I have translated the original Matlab code into R, since R is open source and widely used in data analysis and data science. My main goal in this post is to give people a better understanding of Bayesian statistics, some of its advantages, and some scenarios where you might want to use it. Let's take a moment to think about why we would even want to use Bayesian techniques in the first place.
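To give a flavour of what "from scratch" involves, here is a minimal conjugate version of Bayesian linear regression. This sketch is in Python rather than the post's R, uses simulated data rather than GDP figures, and assumes a known noise variance (the full treatment in the Bank of England manual samples the variance as well):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data standing in for a growth series regressed on one predictor
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.8, size=n)
X = np.column_stack([np.ones(n), x])        # design matrix with intercept

# Conjugate normal prior on the coefficients; noise variance assumed known
sigma2 = 0.8 ** 2
prior_mean = np.zeros(2)
prior_cov = np.eye(2) * 10.0                # weak (diffuse) prior

# Posterior for beta is N(m_n, V_n):
#   V_n = (V_0^-1 + X'X / sigma2)^-1
#   m_n = V_n (V_0^-1 m_0 + X'y / sigma2)
V0_inv = np.linalg.inv(prior_cov)
Vn = np.linalg.inv(V0_inv + X.T @ X / sigma2)
mn = Vn @ (V0_inv @ prior_mean + X.T @ y / sigma2)

# Forecast: posterior predictive mean at a hypothetical new observation
x_new = np.array([1.0, 0.3])
print(mn, x_new @ mn)
```

With a weak prior the posterior mean lands close to the ordinary least-squares estimate, which is exactly the kind of prior-versus-data trade-off the rest of the post explores.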