Belief Revision


Knowledge from Probability

arXiv.org Artificial Intelligence

We give a probabilistic analysis of inductive knowledge and belief and explore its predictions concerning knowledge about the future, about laws of nature, and about the values of inexactly measured quantities. The analysis combines a theory of knowledge and belief formulated in terms of relations of comparative normality with a probabilistic reduction of those relations. It predicts that only highly probable propositions are believed, and that many widely held principles of belief-revision fail. How can we have knowledge that goes beyond what we have observed - knowledge about the future, or about lawful regularities, or about the distal causes of the readings of our scientific instruments? Many philosophers think we can't. Nelson Goodman, for example, disparagingly writes that "obviously the genuine problem [of induction] cannot be one of attaining unattainable knowledge or of accounting for knowledge that we do not in fact have" [20, p. 62]. Such philosophers typically hold that the best we can do when it comes to inductive hypotheses is to assign them high probabilities. Here we argue that such pessimism is misplaced.
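The abstract's central prediction, that whatever is believed must be highly probable, can be illustrated with a toy model (our own construction, not the authors' formal system): identify the "sufficiently normal" worlds with the smallest set of most-probable worlds carrying at least probability mass t, and say a proposition is believed iff it holds at every such world.

```python
# Toy illustration (not the paper's formal system): belief as truth in all
# "sufficiently normal" worlds, where normality is reduced to probability.

def normal_worlds(probs, t=0.95):
    """Smallest set of most-probable worlds whose total mass is >= t."""
    chosen, mass = set(), 0.0
    for w in sorted(probs, key=probs.get, reverse=True):
        chosen.add(w)
        mass += probs[w]
        if mass >= t:
            break
    return chosen

def believed(prop, probs, t=0.95):
    """A proposition (a set of worlds) is believed iff it contains every
    sufficiently normal world; it then has probability >= t by construction."""
    return normal_worlds(probs, t) <= prop

probs = {"sun": 0.90, "cloud": 0.07, "storm": 0.03}
no_storm = {"sun", "cloud"}
print(believed(no_storm, probs))          # True
print(sum(probs[w] for w in no_storm))    # 0.97 >= 0.95
```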


"Unconditional Belief in Heat," by Anna Journey

The New Yorker

I would've stabbed the man's hand had he not jerked it away--this is what I usually say toward the end of the story I've told for almost twenty years. I'm a junior in college, towelling my wet hair as I walk from my bathroom through the hall, headed to my bedroom, at two in the morning. I see you, motherfucker, and the hand jerks back. When I call 911 and reach, incredibly, a busy signal, I phone Ed instead, who will drive over, remove his old A.C. unit, take it to his new place. I would've stabbed the hand that tried to steal my A.C.


Streaming Belief Propagation for Community Detection

arXiv.org Machine Learning

The community detection problem requires clustering the nodes of a network into a small number of well-connected "communities". There has been substantial recent progress in characterizing the fundamental statistical limits of community detection under simple stochastic block models. However, in real-world applications, the network structure is typically dynamic, with nodes that join over time. In this setting, we would like a detection algorithm to perform only a limited number of updates at each node arrival. While standard voting approaches satisfy this constraint, it is unclear whether they exploit the network information optimally. We introduce a simple model for networks growing over time, which we refer to as the streaming stochastic block model (StSBM). Within this model, we prove that voting algorithms have fundamental limitations. We also develop a streaming belief-propagation (StreamBP) approach, for which we prove optimality in certain regimes. We validate our theoretical findings on synthetic and real data.
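The abstract does not spell out StreamBP's update rule; the sketch below is only our generic guess at the kind of one-shot belief-propagation update a streaming algorithm could apply when a node arrives in a symmetric two-community SBM. The function name and the parameters p_in, p_out (within- and across-community edge probabilities) are our own.

```python
import math

# Hedged sketch (not the paper's StreamBP specification): when a new node
# arrives, compute its community posterior in a single pass from the current
# beliefs of its neighbors, using edge likelihoods under each label.

def update_new_node(neighbor_beliefs, p_in, p_out):
    """neighbor_beliefs: list of (b0, b1) community marginals of the arriving
    node's neighbors. Returns the new node's (b0, b1)."""
    log0 = log1 = 0.0
    for b0, b1 in neighbor_beliefs:
        # Likelihood of the observed edge to this neighbor, given our label.
        log0 += math.log(b0 * p_in + b1 * p_out)
        log1 += math.log(b0 * p_out + b1 * p_in)
    m = max(log0, log1)
    w0, w1 = math.exp(log0 - m), math.exp(log1 - m)
    return w0 / (w0 + w1), w1 / (w0 + w1)

# Two neighbors leaning toward community 0 pull the new node toward 0:
print(update_new_node([(0.9, 0.1), (0.8, 0.2)], p_in=0.5, p_out=0.1))
```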


Efficient and accurate group testing via Belief Propagation: an empirical study

arXiv.org Artificial Intelligence

The group testing problem asks for efficient pooling schemes and algorithms that allow screening moderately large numbers of samples for rare infections. The goal is to accurately identify the infected samples while conducting the smallest possible number of tests. Exploring techniques centred around the Belief Propagation message-passing algorithm, we suggest a new test design that significantly increases the accuracy of the results. The new design comes with Belief Propagation as an efficient inference algorithm. Aiming for results on practical rather than asymptotic problem sizes, we conduct an experimental study.
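As a concrete but heavily simplified illustration (noiseless OR tests on a vanilla factor graph, not the paper's refined design), belief propagation for group testing alternates test-to-sample and sample-to-test messages:

```python
# Hedged sketch of BP for noiseless group testing: sample i is infected with
# prior probability q; each test reports the OR of its pool. Messages are
# scalars, the probability that a sample is infected.

def bp_group_testing(n, pools, results, q=0.05, iters=30):
    tests_of = [[a for a, pool in enumerate(pools) if i in pool] for i in range(n)]
    v2t = {(i, a): q for i in range(n) for a in tests_of[i]}  # sample-to-test
    t2v = {}                                                  # test-to-sample
    for _ in range(iters):
        for a, pool in enumerate(pools):
            for i in pool:
                if not results[a]:
                    t2v[(a, i)] = 0.0   # negative pool: member must be healthy
                else:
                    others_zero = 1.0   # P(all other pool members healthy)
                    for j in pool:
                        if j != i:
                            others_zero *= 1.0 - v2t[(j, a)]
                    like1, like0 = 1.0, 1.0 - others_zero  # P(test+ | x_i)
                    t2v[(a, i)] = like1 / (like1 + like0)
        for i in range(n):
            for a in tests_of[i]:
                p1, p0 = q, 1.0 - q
                for b in tests_of[i]:
                    if b != a:
                        p1 *= t2v[(b, i)]
                        p0 *= 1.0 - t2v[(b, i)]
                v2t[(i, a)] = p1 / (p1 + p0) if p1 + p0 > 0 else 0.0
    post = []                           # posterior marginals per sample
    for i in range(n):
        p1, p0 = q, 1.0 - q
        for b in tests_of[i]:
            p1 *= t2v[(b, i)]
            p0 *= 1.0 - t2v[(b, i)]
        post.append(p1 / (p1 + p0) if p1 + p0 > 0 else 0.0)
    return post

# Sample 2 is infected; pools {0,1,2} and {2,3} test positive, {0,3} negative.
print(bp_group_testing(4, [[0, 1, 2], [2, 3], [0, 3]], [1, 1, 0]))
```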


Matrix completion based on Gaussian belief propagation

arXiv.org Machine Learning

We develop a message-passing algorithm for noisy matrix completion problems based on matrix factorization. The algorithm is derived by approximating the message distributions of belief propagation with Gaussian distributions that share the same first and second moments. We also derive a memory-friendly version of the proposed algorithm by applying a perturbation treatment commonly used in the literature on approximate message passing. In addition, a damping technique, which is demonstrated to be crucial for optimal performance, is introduced without computational strain, and the relationship to the message-passing version of alternating least squares, a method reported to be optimal in certain settings, is discussed. Experiments on synthetic datasets show that, in settings where that earlier algorithm is optimal, the proposed algorithm exhibits almost the same quantitative performance, while it is advantageous when the observed datasets are corrupted by non-Gaussian noise. Experiments on real-world datasets likewise highlight the performance differences between the two algorithms.
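The abstract singles out damping as crucial. The sketch below shows the damping idea only, grafted onto a plain regularized alternating-least-squares loop rather than the paper's Gaussian BP; the parameters eta (damping coefficient) and lam (ridge regularizer) are our own illustrative choices.

```python
import numpy as np

# Hedged sketch: damped fixed-point iteration for rank-r matrix completion.
# Each iteration's fresh estimate is mixed with the previous one, which
# suppresses the oscillations that undamped iterations can exhibit.

def damped_als(Y, mask, r=2, eta=0.7, iters=100, lam=1e-3):
    m, n = Y.shape
    rng = np.random.default_rng(0)
    U, V = rng.normal(size=(m, r)), rng.normal(size=(n, r))
    for _ in range(iters):
        U_new = np.zeros_like(U)
        for i in range(m):
            obs = mask[i]                 # observed entries in row i
            A = V[obs].T @ V[obs] + lam * np.eye(r)
            U_new[i] = np.linalg.solve(A, V[obs].T @ Y[i, obs])
        U = eta * U_new + (1 - eta) * U   # damped update
        V_new = np.zeros_like(V)
        for j in range(n):
            obs = mask[:, j]              # observed entries in column j
            A = U[obs].T @ U[obs] + lam * np.eye(r)
            V_new[j] = np.linalg.solve(A, U[obs].T @ Y[obs, j])
        V = eta * V_new + (1 - eta) * V   # damped update
    return U @ V.T

# Usage: recover a rank-2 matrix from 60% of its (noisy) entries.
rng = np.random.default_rng(1)
truth = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))
mask = rng.random(truth.shape) < 0.6
Y = truth + 0.01 * rng.normal(size=truth.shape)
print(np.abs((damped_als(Y, mask) - truth)[~mask]).mean())
```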


A General Katsuno-Mendelzon-Style Characterization of AGM Belief Base Revision for Arbitrary Monotonic Logics

arXiv.org Artificial Intelligence

The AGM postulates by Alchourrón, Gärdenfors, and Makinson continue to represent a cornerstone in research related to belief change. We generalize the approach of Katsuno and Mendelzon (KM) for characterizing AGM base revision from propositional logic to the setting of (multiple) base revision in arbitrary monotonic logics. KM start out from finite propositional belief bases, assigning to each a total preorder on the interpretations, which expresses, intuitively speaking, a degree of plausibility.
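For orientation, the propositional result being generalized is the classical KM representation theorem, stated here from memory:

```latex
% A revision operator $\circ$ satisfies the KM (AGM-style) postulates iff
% there is a faithful assignment mapping each base $\psi$ to a total
% preorder $\le_\psi$ on interpretations such that
\[
  \mathrm{Mod}(\psi \circ \mu) \;=\; \min\bigl(\mathrm{Mod}(\mu),\, \le_{\psi}\bigr),
\]
% i.e., the models of the revised base are exactly the $\le_\psi$-minimal
% models of the new information $\mu$.
```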


High-dimensional near-optimal experiment design for drug discovery via Bayesian sparse sampling

arXiv.org Machine Learning

We study the problem of performing automated experiment design for drug screening through Bayesian inference and optimisation. In particular, we compare and contrast the behaviour of linear-Gaussian models and Gaussian processes, when used in conjunction with upper confidence bound algorithms, Thompson sampling, or bounded horizon tree search. We show that non-myopic, sophisticated exploration techniques using sparse tree search have a distinct advantage over methods such as Thompson sampling or upper confidence bounds in this setting. We demonstrate the significant superiority of this approach on existing and synthetic drug-toxicity datasets.
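For concreteness, here is a minimal sketch of one of the myopic baselines in the comparison, Thompson sampling with a linear-Gaussian model; the feature matrix and the parameters sigma2 (noise variance) and tau2 (prior variance) are our own illustrative choices, and the paper's sparse tree search is not reproduced here.

```python
import numpy as np

# Hedged sketch: Thompson sampling for experiment selection. Candidate
# compounds are rows of X; each round draws a weight vector from the
# Gaussian posterior and tests the compound that the draw scores highest.

def thompson_experiment_design(X, observe, rounds=10, sigma2=0.1, tau2=1.0):
    n, d = X.shape
    A = np.eye(d) / tau2            # posterior precision of the weights
    b = np.zeros(d)
    tried = []
    for _ in range(rounds):
        cov = np.linalg.inv(A)
        w = np.random.multivariate_normal(cov @ b, cov)  # posterior draw
        scores = X @ w
        scores[[i for i, _ in tried]] = -np.inf          # don't repeat tests
        i = int(np.argmax(scores))
        y = observe(i)                                   # costly experiment
        A += np.outer(X[i], X[i]) / sigma2               # Bayesian update
        b += X[i] * y / sigma2
        tried.append((i, y))
    return tried

# Usage: 50 hypothetical compounds, 5 features, noisy linear response.
rng = np.random.default_rng(0)
X, w_true = rng.normal(size=(50, 5)), rng.normal(size=5)
print(thompson_experiment_design(X, lambda i: X[i] @ w_true + 0.1 * rng.normal()))
```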


A geometric approach to conditioning belief functions

arXiv.org Artificial Intelligence

Conditioning is crucial in applied science whenever inference involving time series is required. Belief calculus is an effective way of handling such inference in the presence of epistemic uncertainty -- unfortunately, different approaches to conditioning in the belief function framework have been proposed in the past, leaving the matter somewhat unsettled. Inspired by the geometric approach to uncertainty, in this paper we propose an approach to the conditioning of belief functions based on geometrically projecting them onto the simplex associated with the conditioning event in the space of all belief functions. We show here that such a geometric approach to conditioning often produces simple results with straightforward interpretations in terms of degrees of belief. This raises the question of whether classical approaches, such as Dempster's conditioning, can also be reduced to some form of distance minimisation in a suitable space. The study of families of combination rules generated by (geometric) conditioning rules appears to be the natural continuation of the presented research.
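The Dempster conditioning mentioned as the classical point of comparison is easy to state operationally; a minimal sketch follows (the frame and masses are our own toy example, and the paper's geometric projection is not reproduced here).

```python
# Dempster's conditioning of a mass function m on an event B: the mass of
# each focal set A is transferred to A ∩ B; mass landing on the empty set
# is discarded and the remainder renormalized.

def dempster_condition(m, B):
    """m: dict mapping focal sets (frozensets) to masses; B: frozenset event."""
    cond = {}
    for A, mass in m.items():
        C = A & B
        if C:
            cond[C] = cond.get(C, 0.0) + mass
    total = sum(cond.values())
    return {A: mass / total for A, mass in cond.items()}

# Frame {a, b, c}; condition on B = {a, b}.
m = {frozenset('a'): 0.5, frozenset('bc'): 0.3, frozenset('abc'): 0.2}
print(dempster_condition(m, frozenset('ab')))
```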


Uncertainty measures: The big picture

arXiv.org Artificial Intelligence

Probability theory is far from being the most general mathematical theory of uncertainty. A number of arguments point to its inability to describe second-order ('Knightian') uncertainty. In response, a wide array of theories of uncertainty have been proposed, many of them generalisations of classical probability. As we show here, such frameworks can be organised into clusters sharing a common rationale, exhibit complex links, and are characterised by different levels of generality. Our goal is a critical appraisal of the current landscape in uncertainty theory.


On Mixed Iterated Revisions

arXiv.org Artificial Intelligence

Several forms of iterable belief change exist, differing in the kind of change and its strength: some operators introduce formulae, others remove them; some add formulae unconditionally, others only as additions to the previous beliefs; some only relative to the current situation, others in all possible cases. A sequence of changes may involve several of them: for example, the first step is a revision, the second a contraction and the third a refinement of the previous beliefs. The ten operators considered in this article are shown to be all reducible to three: lexicographic revision, refinement and severe withdrawal. In turn, these three can be expressed in terms of lexicographic revision at the cost of restructuring the sequence. This restructuring need not be done explicitly: we present an algorithm that works on the original sequence. The complexity of mixed sequences of belief change operators is also analyzed: most of them require only a polynomial number of calls to a satisfiability checker, and some are even easier.
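Of the three primitive operators, lexicographic revision is the easiest to state operationally; a minimal sketch on ranked tiers of worlds follows (the encoding of worlds is our own toy example).

```python
# Lexicographic revision of a ranked model: worlds are kept in plausibility
# tiers (index 0 = most plausible). Revising by mu makes every mu-world
# strictly more plausible than every non-mu-world, while preserving the
# relative order within each of the two groups.

def lex_revise(tiers, mu):
    """tiers: list of sets of worlds, most plausible first.
    mu: set of worlds satisfying the new information."""
    sat = [t & mu for t in tiers if t & mu]
    unsat = [t - mu for t in tiers if t - mu]
    return sat + unsat

# Worlds encode (rain, wind); revise by "rain" (first component is 1).
tiers = [{(0, 0)}, {(0, 1), (1, 0)}, {(1, 1)}]
rain = {(1, 0), (1, 1)}
print(lex_revise(tiers, rain))  # [{(1,0)}, {(1,1)}, {(0,0)}, {(0,1)}]
```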