Bayesianism
The Problem of the Priors, or Posteriors?
The problem of the priors is well known: it concerns the challenge of identifying norms that govern one's prior credences. I argue that a key to addressing this problem lies in considering what I call the problem of the posteriors -- the challenge of identifying norms that directly govern one's posterior credences, which then induce constraints on the priors via the diachronic requirement of conditionalization. This forward-looking approach can be summarized as: Think ahead, work backward. Although this idea can be traced to Freedman (1963), Carnap (1963), and Shimony (1970), it has received little attention in philosophy. In this paper, I initiate a systematic defense of forward-looking Bayesianism, addressing potential objections from more traditional views (both subjectivist and objectivist) and arguing for its advantages. In particular, I develop a specific approach to forward-looking Bayesianism -- one that treats the convergence of posterior credences to the truth as a fundamental rather than derived normative requirement. This approach, called convergentist Bayesianism, is argued to be crucial for a Bayesian foundation of Ockham's razor and related inference methods in statistics and machine learning.
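The convergence of posterior credences to the truth that the abstract treats as a normative requirement can be illustrated numerically. The following is a minimal sketch (our illustration, not from the paper), assuming i.i.d. Bernoulli data and a conjugate Beta prior; all variable names are hypothetical.

```python
# Illustration: posterior credences concentrating on the true parameter
# as evidence accumulates (Beta-Bernoulli conjugate updating).
import random

random.seed(0)
theta_true = 0.7          # true bias of the coin (unknown to the agent)
a, b = 1.0, 1.0           # Beta(1, 1) prior, i.e. uniform over [0, 1]

n = 10_000
successes = sum(1 for _ in range(n) if random.random() < theta_true)

# Conjugacy: posterior is Beta(a + successes, b + failures).
a_post = a + successes
b_post = b + (n - successes)
posterior_mean = a_post / (a_post + b_post)
posterior_var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))

print(f"posterior mean ~ {posterior_mean:.3f}, sd ~ {posterior_var ** 0.5:.4f}")
```

As n grows the posterior standard deviation shrinks toward zero and the posterior mean approaches the true parameter, which is the kind of convergence behavior the convergentist view elevates to a fundamental norm.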
Statistical inference as Green's functions
Lee, Hyun Keun, Kwon, Chulan, Kim, Yong Woon
Statistical inference from data is a foundational task in science. Recently, it has received growing attention for its central role in the inference systems at the heart of data science and machine learning. However, the understanding of statistical inference itself is not solid: it remains either a matter of subjective belief or a set of routine procedures once claimed to be objective. We show here that there is an objective description of statistical inference for long sequences of exchangeable binary random variables, the prototypical stochasticity in theories and applications. A linear differential equation is derived from the identity known as de Finetti's representation theorem, and it turns out that statistical inference is given by its Green's functions. Our finding answers a normative question for a science that pursues objectivity based on data, and its significance may be far-reaching in most pure and applied fields.
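The de Finetti representation invoked here can be checked numerically in a simple special case. The sketch below (our illustration, not the paper's derivation) assumes a uniform mixing measure, for which the integral of θ^s (1-θ)^(n-s) over [0, 1] has the closed form s! (n-s)! / (n+1)!.

```python
# Check de Finetti's representation for exchangeable binary sequences:
# the probability of any particular sequence with s ones among n trials
# is an integral of theta^s * (1 - theta)^(n - s) against a mixing
# measure. With a uniform mixing measure this equals s!(n-s)!/(n+1)!.
from math import factorial

def seq_prob_numeric(n, s, steps=100_000):
    """Midpoint-rule approximation of the mixing integral."""
    h = 1.0 / steps
    return sum(((i + 0.5) * h) ** s * (1 - (i + 0.5) * h) ** (n - s)
               for i in range(steps)) * h

def seq_prob_exact(n, s):
    """Closed form under the uniform mixing measure."""
    return factorial(s) * factorial(n - s) / factorial(n + 1)

n, s = 10, 7
print(seq_prob_numeric(n, s), seq_prob_exact(n, s))
```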
Unforeseen Evidence
In this note, I propose a normative updating rule, extended Bayesianism, for incorporating probabilistic information that arises in the process of becoming more aware. Extended Bayesianism generalizes standard Bayesian updating by allowing the posterior to reside on a richer probability space than the prior. I then provide an observable criterion on prior and posterior beliefs under which they are consistent with extended Bayesianism. Key words: extended Bayesianism; reverse Bayesianism; conditional expectations. Conditioning on Unforeseen Evidence: Decision makers (DMs) who are unaware cannot conceive of, nor articulate, the decision-relevant contingencies they are unaware of.
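The idea of a posterior living on a richer space than the prior can be sketched with a toy example. This is our construction, not the note's formalism: the states, likelihoods, and split weights below are all hypothetical, and the consistency check is the natural marginalization requirement.

```python
# Toy sketch: the prior lives on coarse states; after becoming aware, the
# posterior lives on refinements. Consistency requirement: summing the
# posterior over the refinements of each coarse state should match
# ordinary Bayesian updating on the coarse space.

prior = {"rain": 0.4, "dry": 0.6}          # coarse prior
likelihood = {"rain": 0.9, "dry": 0.2}     # P(evidence | state)

# Ordinary Bayesian posterior on the coarse space.
z = sum(prior[s] * likelihood[s] for s in prior)
coarse_post = {s: prior[s] * likelihood[s] / z for s in prior}

# The newly aware agent splits "rain" into refinements and distributes
# the coarse posterior mass across them (split weights chosen arbitrarily
# for illustration; they encode what the new awareness contributes).
split = {"rain": {"heavy rain": 0.3, "light rain": 0.7},
         "dry": {"dry": 1.0}}
fine_post = {f: coarse_post[s] * w
             for s, ws in split.items() for f, w in ws.items()}

# Check: marginalizing the fine posterior recovers the coarse posterior.
marg = {s: sum(fine_post[f] for f in split[s]) for s in split}
print(coarse_post, marg)
```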
Plausibility and probability in deductive reasoning
We consider the problem of rational uncertainty about unproven mathematical statements, remarked on by Gödel and others. Using Bayesian-inspired arguments, we build a normative model of fair bets under deductive uncertainty that draws on both probability theory and the theory of algorithms. We comment on connections to Zeilberger's notion of "semi-rigorous proofs", in particular that an inherent subjectivity would be present. We also discuss a financial view, with models of arbitrage in which traders have limited computational resources.
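The coherence constraint underlying fair bets can be made concrete. The following is a small illustration of a standard Dutch-book argument (our example, not the paper's model): if a bettor posts prices for a statement and its negation that do not sum to one, an opponent locks in a riskless profit regardless of whether the statement is ever proved.

```python
# Dutch-book check for betting prices on a statement A and its negation.

def dutch_book_profit(p_a, p_not_a, stake=1.0):
    """Guaranteed profit per unit stake from trading both tickets."""
    total = p_a + p_not_a
    if total < 1.0:   # buy both tickets: pay `total`, collect 1 for sure
        return stake * (1.0 - total)
    if total > 1.0:   # sell both tickets: collect `total`, pay out 1
        return stake * (total - 1.0)
    return 0.0        # coherent prices: no sure profit

print(dutch_book_profit(0.6, 0.3))   # incoherent prices: sure profit of 0.1
print(dutch_book_profit(0.6, 0.4))   # coherent prices: 0.0
```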
What's New in Deep Learning Research: Teaching Computers How to Code
Writing programs that can create programs has been an elusive goal of artificial intelligence (AI) research for many years. In fact, the idea that AI agents can create their own programs is often seen as one of the differentiators between general AI and narrow AI. So important is this goal that AI researchers have created a specific area of research, known as program synthesis, that focuses on these challenges. The idea behind program synthesis is to create AI agents that can generate programs that match a given specification. We use primitive versions of this technique whenever we take advantage of, for instance, the Flash Fill feature in Microsoft Excel.
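A program-matching-a-specification synthesizer can be sketched in a few lines. The toy below is illustrative only (real systems such as Flash Fill are far more sophisticated): it enumerates pipelines over a tiny hypothetical DSL of string transformations until one is consistent with all given input/output examples.

```python
# Toy enumerative program synthesis over a tiny string-transformation DSL.
from itertools import product

# The DSL: each primitive is a named string -> string function.
PRIMS = {
    "upper": str.upper,
    "lower": str.lower,
    "strip": str.strip,
    "first3": lambda s: s[:3],
}

def synthesize(examples, max_len=2):
    """Return the shortest primitive pipeline matching all examples."""
    for length in range(1, max_len + 1):
        for names in product(PRIMS, repeat=length):
            def run(s, names=names):
                for name in names:
                    s = PRIMS[name](s)
                return s
            if all(run(inp) == out for inp, out in examples):
                return list(names)
    return None

# Specification by examples: uppercase, then keep the first three characters.
examples = [("hello", "HEL"), ("world", "WOR")]
print(synthesize(examples))
```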
Some Problems for Convex Bayesians
Kyburg, Henry E. Jr., Pittarelli, Michael
The leading contender is Levi's. When the set contains only one function, convex conditionalization and E-admissibility reduce to their strict Bayesian counterparts. Thus, with respect to decision making and to representing and updating uncertainty, convex Bayesianism includes strict Bayesianism as a special case. There are natural constraints on probability judgments that cannot be represented by convex sets of classical probability functions. Working with the convex hull of a nonconvex set of probability functions may result in unnecessary indecisiveness. This is not a convex set. Judgments of irrelevance (conditional irrelevance), that is, of probabilistic independence (conditional independence), are often made, are natural to make, can be made reliably, and provide well-known computational advantages [Pearl, 1988].
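Conditioning a convex set of probability functions can be illustrated numerically. This is a small sketch of our own (not the paper's examples), assuming a credal set given by two extreme points over three hypothetical states: each extreme point is conditioned by ordinary Bayesian updating, and lower/upper posterior probabilities are read off the resulting set.

```python
# Conditioning a convex set of probability functions, extreme point by
# extreme point, then reporting lower/upper posterior probabilities.

def condition(p, event):
    """Bayesian conditioning of a distribution dict on a set of states."""
    z = sum(p[s] for s in event)
    return {s: (p[s] / z if s in event else 0.0) for s in p}

# Two extreme points of a credal set over states a, b, c.
p1 = {"a": 0.5, "b": 0.3, "c": 0.2}
p2 = {"a": 0.2, "b": 0.3, "c": 0.5}

event = {"a", "b"}              # condition on "not c"
posts = [condition(p, event) for p in (p1, p2)]

lower_a = min(q["a"] for q in posts)
upper_a = max(q["a"] for q in posts)
print(f"P(a | not c) ranges over [{lower_a:.3f}, {upper_a:.3f}]")
```

Note that the prior interval for a, [0.2, 0.5], maps to the posterior interval [0.4, 0.625]: conditioning each extreme point separately is exactly what makes the convex-set picture computationally tractable.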
Decision Principles to justify Carnap's Updating Method and to Suggest Corrections of Probability Judgments (Invited Talks)
This paper uses decision-theoretic principles to obtain new insights into the assessment and updating of probabilities. First, a new foundation of Bayesianism is given. It does not require infinite atomless uncertainties as did Savage's classical result, and can therefore be applied to any finite Bayesian network. Neither does it require linear utility as did de Finetti's classical result, and it therefore allows for the empirically and normatively desirable risk aversion. Finally, by identifying and fixing utility in an elementary manner, our result can readily be applied to identify methods of probability updating. Thus, a decision-theoretic foundation is given to the computationally efficient method of inductive reasoning developed by Rudolf Carnap. Finally, recent empirical findings on probability assessments are discussed. This leads to suggestions for correcting biases in probability assessments, and for an alternative to the Dempster-Shafer belief functions that avoids the reduction to degeneracy after multiple updatings.
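Carnap's updating method referred to here is his continuum of inductive methods. The sketch below uses the standard textbook formula (our illustration, not taken from this paper): with k categories and smoothing parameter lambda, the predictive probability of category i after n observations, n_i of them in category i, is (n_i + lambda/k) / (n + lambda).

```python
# Carnap's continuum of inductive methods: predictive probability of the
# next observation falling in category i.

def carnap_predictive(n_i, n, k, lam):
    """(n_i + lam / k) / (n + lam): lam = 0 is the straight rule,
    lam = k recovers Laplace's rule of succession."""
    return (n_i + lam / k) / (n + lam)

# Example: two categories, 10 observations, 7 of them in category 1.
for lam in (0.0, 2.0, 10.0):
    print(lam, carnap_predictive(7, 10, 2, lam))
```

Larger lambda weights the symmetric prior more heavily and slows learning from the data, which is what makes the parameter a natural target for the decision-theoretic foundation the abstract describes.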