Automating Model Comparison in Factor Graphs

van Erp, Bart, Nuijten, Wouter W. L., van de Laar, Thijs, de Vries, Bert

arXiv.org Artificial Intelligence 

The famous aphorism of George Box states: "all models are wrong, but some are useful" [1]. It is the task of statisticians and data analysts to find the model that is most useful for a given problem. The build, compute, critique and repeat cycle [2], also known as Box's loop [3], is an iterative approach to finding the most useful model. Any effort to shorten this design cycle increases the chances of developing more useful models, which in turn might yield more reliable predictions, more profitable returns, or more efficient operations for the problem at hand. In this paper we adopt the Bayesian formalism and therefore specify all tasks in Box's loop as principled probabilistic inference tasks. In addition to the well-known parameter and state inference tasks, the critique step in the design cycle is also phrased as an inference task, known as Bayesian model comparison, which automatically embodies Occam's razor [4, Ch. 28.1]. As opposed to selecting just a single model in the critique step, it is often better to quantify our confidence about which of the candidate models is best, especially when data is limited [5, Ch. 18.5.1]. The uncertainty arising from prior beliefs p(m) over a set of models m and from limited observations can be naturally accounted for through Bayes' theorem.
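The Bayes' theorem computation referred to above can be sketched concretely: given the (log-)evidence p(D | m) of each candidate model and prior beliefs p(m), the posterior over models is p(m | D) ∝ p(D | m) p(m). The snippet below is a minimal illustration of this normalization step (the function name and example evidences are hypothetical, not from the paper), using a log-sum-exp for numerical stability since model evidences are typically tiny.

```python
import math

def model_posterior(log_evidences, prior):
    # Bayesian model comparison: p(m | D) ∝ p(D | m) p(m).
    # Work in log-space and normalize with log-sum-exp for stability.
    logs = [le + math.log(p) for le, p in zip(log_evidences, prior)]
    top = max(logs)
    log_norm = top + math.log(sum(math.exp(l - top) for l in logs))
    return [math.exp(l - log_norm) for l in logs]

# Hypothetical example: two candidate models with equal prior belief.
# The model with the larger evidence receives the larger posterior mass,
# but the weaker model retains nonzero probability (limited data).
posterior = model_posterior([-10.0, -12.0], [0.5, 0.5])
```

Note that the posterior probabilities always sum to one, and with equal priors the ranking of models is determined entirely by their evidences.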
