
Collaborating Authors

 Goodman, Jeremy


Belief Revision from Probability

arXiv.org Artificial Intelligence

In previous work ("Knowledge from Probability", TARK 2021) we develop a question-relative, probabilistic account of belief. On this account, what someone believes relative to a given question is (i) closed under entailment, (ii) sufficiently probable given their evidence, and (iii) sensitive to the relative probabilities of the answers to the question. Here we explore the implications of this account for the dynamics of belief. We show that the principles it validates are much weaker than those of orthodox theories of belief revision like AGM, but still stronger than those valid according to the popular Lockean theory of belief, which equates belief with high subjective probability. We then consider a restricted class of models, suitable for many but not all applications, and identify some further natural principles valid on this class. We conclude by arguing that the present framework compares favorably to the rival probabilistic accounts of belief developed by Leitgeb and by Lin and Kelly.
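One way to picture the account described in this abstract (a minimal sketch under assumptions of our own, not the paper's official definitions): treat the question as a partition into answers, rank the answers by their evidential probability, and believe exactly what follows from the most probable answers needed to reach a confidence threshold t. The Python below renders that idea; the example question, the probabilities, and the threshold are hypothetical.

```python
# Illustrative sketch only: one way to realize conditions (i)-(iii) above.
# The question, probabilities, and threshold t are hypothetical examples,
# not definitions or data taken from the paper.

def believed_answers(question, prob, t=0.9):
    """Rank the answers to the question by evidential probability and keep
    the most probable ones until their cumulative probability reaches t."""
    ranked = sorted(question, key=lambda a: prob[a], reverse=True)
    kept, mass = [], 0.0
    for answer in ranked:
        kept.append(answer)
        mass += prob[answer]
        if mass >= t:
            break
    return set(kept)

def believes(proposition, question, prob, t=0.9):
    """A proposition (here, a set of answers) is believed iff it is entailed
    by the disjunction of the kept answers.  Beliefs so defined are closed
    under entailment (i), have probability at least t (ii), and depend on
    the relative probabilities of the answers (iii)."""
    return believed_answers(question, prob, t) <= set(proposition)

# Hypothetical question: "How many heads in three fair coin flips?"
question = ["0", "1", "2", "3"]
prob = {"0": 0.125, "1": 0.375, "2": 0.375, "3": 0.125}
print(believed_answers(question, prob, t=0.7))            # {'1', '2'}
print(believes({"0", "1", "2"}, question, prob, t=0.7))   # True: "fewer than 3 heads"
print(believes({"1"}, question, prob, t=0.7))             # False
```

Note how shifting probability mass between answers changes which answers are kept; that is the sense in which what is believed is sensitive to the relative probabilities of the answers to the question.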


Knowledge from Probability

arXiv.org Artificial Intelligence

We give a probabilistic analysis of inductive knowledge and belief and explore its predictions concerning knowledge about the future, about laws of nature, and about the values of inexactly measured quantities. The analysis combines a theory of knowledge and belief formulated in terms of relations of comparative normality with a probabilistic reduction of those relations. It predicts that only highly probable propositions are believed, and that many widely held principles of belief-revision fail. How can we have knowledge that goes beyond what we have observed: knowledge about the future, or about lawful regularities, or about the distal causes of the readings of our scientific instruments? Many philosophers think we can't. Nelson Goodman, for example, disparagingly writes that "obviously the genuine problem [of induction] cannot be one of attaining unattainable knowledge or of accounting for knowledge that we do not in fact have" [20, p. 62]. Such philosophers typically hold that the best we can do when it comes to inductive hypotheses is to assign them high probabilities. Here we argue that such pessimism is misplaced.
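Stated schematically (a hedged gloss of the abstract, not the paper's official formulation), the two components fit together as follows, with E the agent's evidence and "sufficiently normal" left as a placeholder:

```latex
% Schematic gloss only; the world-level formulation and the notion of
% "sufficiently normal" are illustrative placeholders, not the paper's
% official definitions.
w \succeq v \;\iff\; \Pr(w \mid E) \ge \Pr(v \mid E)
  \qquad \text{(comparative normality reduced to evidential probability)}
\\[4pt]
\mathrm{Bel}(p) \;\iff\; p \text{ is true at every sufficiently normal world}
```

On this gloss, belief is automatically closed under entailment, and the prediction that only highly probable propositions are believed corresponds to requiring that the sufficiently normal worlds carry high total evidential probability.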