Naive probability

Zalán Gyenis, András Kornai

arXiv.org Artificial Intelligence 

Historically, the theory of probability emerged from the efforts of Pascal and Fermat in the 1650s to solve problems posed by a gambler, the Chevalier de Méré (Rényi, 1972; Devlin, 2008), and reached its current form in (Kolmogorov, 1933). Remarkably, not even highly experienced gamblers can extract high-precision probability estimates from observed data: one of de Méré's questions concerned comparing the probability of getting at least one 6 in four rolls of one die (p ≈ 0.5177) with that of getting at least one double-6 in 24 throws of a pair of dice (p ≈ 0.4914). Four decades later, Samuel Pepys asked Newton to discern the difference between getting at least two 6s when 12 dice are rolled (p ≈ 0.6187) and getting at least three 6s when 18 dice are rolled (p ≈ 0.5973). In this paper we make this phenomenon, the very limited ability of people to deal with probabilities, the focal point of our inquiry. These limitations, we will argue, go beyond the well-understood limits of numerosity (Dehaene, 1997), and touch upon areas such as the cognitive limits of deduction (Kracht, 2011) and default inheritance (Etherington, 1987). We will offer a model of the naive/commonsensical theory of probability. In Section 2 we discuss likeliness, which we take to be a valuation of propositions on a discrete (seven-point) scale. In Section 3 we turn to the inference mechanism supported by the naive theory, akin to Jeffreys-style probability updates. In Section 4 we briefly sketch the background theory and discuss what we take to be the central concern, learnability.
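All four wagers cited above reduce to binomial tail probabilities: the chance of at least k successes in n independent trials with per-trial success probability q is 1 minus the sum of the binomial terms for 0 through k-1 successes. The following minimal Python sketch (ours, not from the paper; the helper name p_at_least is hypothetical) reproduces the four figures:

```python
from math import comb

def p_at_least(k: int, n: int, q: float) -> float:
    """Probability of at least k successes in n independent trials,
    each succeeding with probability q (complement of the binomial CDF)."""
    return 1.0 - sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k))

# de Méré's two wagers
print(f"{p_at_least(1, 4, 1/6):.4f}")    # at least one 6 in 4 rolls       -> 0.5177
print(f"{p_at_least(1, 24, 1/36):.4f}")  # at least one double-6 in 24 throws -> 0.4914

# The Pepys-Newton problem
print(f"{p_at_least(2, 12, 1/6):.4f}")   # at least two 6s with 12 dice    -> 0.6187
print(f"{p_at_least(3, 18, 1/6):.4f}")   # at least three 6s with 18 dice  -> 0.5973
```

As the outputs show, each pair of options differs by only a few percentage points, which is precisely why the gamblers' intuitions could not separate them.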
