
 Belief Revision


Simultaneous Perception-Action Design via Invariant Finite Belief Sets

arXiv.org Artificial Intelligence

Although perception is an increasingly dominant portion of the overall computational cost for autonomous systems, only a fraction of the information perceived is likely to be relevant to the current task. To alleviate these perception costs, we develop a novel simultaneous perception-action design framework wherein an agent senses only the task-relevant information. This formulation differs from that of a partially observable Markov decision process, since the agent is free to synthesize not only its policy for action selection but also its belief-dependent observation function. The method enables the agent to balance its perception costs with those incurred by operating in its environment. To obtain a computationally tractable solution, we approximate the value function using a novel method of invariant finite belief sets, wherein the agent acts exclusively on a finite subset of the continuous belief space. We solve the approximate problem through value iteration, solving a linear program for each belief state in the set at every iteration. Finally, we prove that the value functions, under an assumption on their structure, converge to their continuous state-space values as the sample density increases.
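
For a concrete picture of the belief-set approximation, the following is a minimal sketch of value iteration restricted to a finite set of sampled beliefs. The toy POMDP (random transition, observation, and reward tensors) and the nearest-neighbour projection back onto the belief set are illustrative assumptions; the paper's method additionally optimizes the observation function and solves a linear program at each belief point, which is omitted here.

```python
# Value iteration over a finite belief set for a toy POMDP (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
nS, nA, nO = 4, 2, 3
T = rng.dirichlet(np.ones(nS), size=(nA, nS))   # T[a, s, s'] transition kernel
Z = rng.dirichlet(np.ones(nO), size=(nA, nS))   # Z[a, s', o] observation kernel
R = rng.normal(size=(nA, nS))                   # R[a, s] immediate reward
gamma = 0.95

# Finite belief set: a fixed sample of beliefs (each row sums to 1).
B = rng.dirichlet(np.ones(nS), size=32)

def belief_update(b, a, o):
    """Bayes filter: predict with T, correct with Z, then normalize."""
    pred = b @ T[a]
    upd = pred * Z[a, :, o]
    norm = upd.sum()
    return upd / norm if norm > 0 else pred

def project(b):
    """Map an updated belief back to its nearest member of the finite set."""
    return int(np.argmin(np.linalg.norm(B - b, axis=1)))

V = np.zeros(len(B))
for _ in range(200):                  # value iteration over the finite belief set
    V_new = np.empty_like(V)
    for i, b in enumerate(B):
        q = np.empty(nA)
        for a in range(nA):
            po = (b @ T[a]) @ Z[a]    # probability of each observation under (b, a)
            ev = sum(po[o] * V[project(belief_update(b, a, o))]
                     for o in range(nO) if po[o] > 0)
            q[a] = b @ R[a] + gamma * ev
        V_new[i] = q.max()
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new
```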


Situated Conditional Reasoning

arXiv.org Artificial Intelligence

Conditionals are useful for modelling, but are not always sufficiently expressive for capturing information accurately. In this paper we make the case for a form of conditional that is situation-based. These conditionals are more expressive than classical conditionals, are general enough to be used in several application domains, and are able to distinguish, for example, between expectations and counterfactuals. Formally, they are shown to generalise the conditional setting in the style of Kraus, Lehmann, and Magidor. We show that situation-based conditionals can be described in terms of a set of rationality postulates. We then propose an intuitive semantics for these conditionals, and present a representation result which shows that our semantic construction corresponds exactly to the description in terms of postulates. With the semantics in place, we proceed to define a form of entailment for situated conditional knowledge bases, which we refer to as minimal closure. It is reminiscent of, and indeed inspired by, the version of entailment for propositional conditional knowledge bases known as rational closure. Finally, we proceed to show that it is possible to reduce the computation of minimal closure to a series of propositional entailment and satisfiability checks. While this is also the case for rational closure, it is somewhat surprising that the result carries over to minimal closure.
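
As a rough illustration of reducing a closure construction to propositional checks, here is a sketch of the rational-closure-style ranking that minimal closure builds on. The tiny propositional machinery (formulas as Python predicates, brute-force enumeration in place of a SAT solver) and the bird/penguin knowledge base are illustrative assumptions, not the paper's construction of minimal closure.

```python
# Rational-closure ranking via iterated exceptionality checks (toy example).
from itertools import product

ATOMS = ["bird", "penguin", "flies"]

def models(formula):
    """All valuations (dicts over ATOMS) satisfying the formula."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if formula(v):
            yield v

def entails(premises, conclusion):
    """Classical entailment: every model of the premises satisfies the conclusion."""
    return all(conclusion(v) for v in models(lambda v: all(p(v) for p in premises)))

# Defeasible conditionals "antecedent |~ consequent".
kb = [
    (lambda v: v["bird"],    lambda v: v["flies"]),       # birds normally fly
    (lambda v: v["penguin"], lambda v: v["bird"]),        # penguins are birds
    (lambda v: v["penguin"], lambda v: not v["flies"]),   # penguins normally don't fly
]

def rational_closure_ranks(conditionals):
    """Partition conditionals into ranks by repeated exceptionality checks."""
    ranks, remaining = [], list(conditionals)
    while remaining:
        materialized = [lambda v, a=a, c=c: (not a(v)) or c(v) for a, c in remaining]
        # A conditional is exceptional if the materialization falsifies its antecedent.
        exceptional = [(a, c) for a, c in remaining
                       if entails(materialized, lambda v, a=a: not a(v))]
        if len(exceptional) == len(remaining):   # totally exceptional tail
            ranks.append(remaining)
            break
        ranks.append([x for x in remaining if x not in exceptional])
        remaining = exceptional
    return ranks

for i, layer in enumerate(rational_closure_ranks(kb)):
    print(f"rank {i}: {len(layer)} conditional(s)")
```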


Forgetting Formulas and Signature Elements in Epistemic States

arXiv.org Artificial Intelligence

Delgrande's knowledge level account of forgetting provides a general approach to forgetting syntax elements from sets of formulas, with links to many other forgetting operations, in particular Boole's variable elimination. On the other hand, marginalisation of epistemic states, well known from probability theory, is a specific approach to actively reducing signatures in more complex semantic frameworks, likewise aiming at forgetting atoms. In this paper, we bring these two perspectives on forgetting together by showing that marginalisation can be considered an extension of Delgrande's approach to the level of epistemic states. More precisely, we generalize Delgrande's axioms of forgetting to forgetting in epistemic states, and show that marginalisation is the most specific and informative forgetting operator that satisfies these axioms. Moreover, we elaborate suitable phrasings of Delgrande's concept of forgetting for formulas by transferring the basic ideas of the axioms to forgetting formulas from epistemic states. However, we show that this results in trivial approaches to forgetting formulas. This finding supports the claim that forgetting syntax elements is essentially different from belief contraction, as axiomatized, e.g., in the AGM belief change framework.
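
A small sketch of marginalisation on a ranking-function (OCF) style epistemic state may help make the signature-reduction idea concrete: the marginal rank of a reduced world is the minimum rank over all full worlds agreeing with it on the retained atoms. The three-atom signature and the particular ranking are arbitrary toy choices.

```python
# Marginalisation of a toy ranking function as signature-level forgetting.
from itertools import product

ATOMS = ("a", "b", "c")

def worlds(atoms):
    return [dict(zip(atoms, bits)) for bits in product([False, True], repeat=len(atoms))]

# A toy ranking function kappa: world -> degree of implausibility (0 = most plausible).
kappa = {tuple(sorted(w.items())): rank
         for w, rank in zip(worlds(ATOMS), [0, 1, 1, 2, 0, 3, 2, 1])}

def marginalise(kappa, keep):
    """Forget all atoms outside `keep` by minimising kappa over the forgotten ones."""
    marginal = {}
    for w in worlds(keep):
        key = tuple(sorted(w.items()))
        marginal[key] = min(rank for full, rank in kappa.items()
                            if all(dict(full)[atom] == w[atom] for atom in keep))
    return marginal

print(marginalise(kappa, keep=("a", "b")))
```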


On Limited Non-Prioritised Belief Revision Operators with Dynamic Scope

arXiv.org Artificial Intelligence

The research on non-prioritized revision studies revision operators which do not accept all new beliefs. In this paper, we contribute to this line of research by introducing the concept of dynamic-limited revision: revisions expressible by a total preorder over a limited set of worlds. For a belief change operator, we consider the scope, which consists of those beliefs for which revision succeeds. We show that for each set satisfying single sentence closure and disjunction completeness there exists a dynamic-limited revision having the union of this set with the belief set as scope. We investigate iteration postulates for belief and scope dynamics and characterise them for dynamic-limited revision. As an application, we employ dynamic-limited revision to study belief revision in the context of so-called inherent beliefs, which are beliefs globally accepted by the agent. This leads to revision operators which we call inherence-limited. We present a representation theorem for inherence-limited revision, and we compare these operators and dynamic-limited revision with the closely related credible-limited revision operators.
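
The following sketch illustrates the basic shape of a revision operator given by a total preorder over a limited set of worlds: revision by a sentence succeeds only if some world in that set satisfies it, and otherwise the prior beliefs are kept. The two-atom worlds, the ranks, and the simple notion of "beliefs" used here are illustrative assumptions, not the paper's formal construction.

```python
# A toy revision operator driven by a total preorder over a limited set of worlds.
from itertools import product

ATOMS = ("p", "q")
ALL_WORLDS = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=2)]

# Limited set of worlds with a total preorder given by integer ranks.
LIMITED = [
    ({"p": True,  "q": True},  0),
    ({"p": True,  "q": False}, 1),
    # Worlds with p = False are not represented at all.
]

def beliefs(selected):
    """Atoms believed: those with the same truth value in every selected world."""
    return {a: selected[0][a] for a in ATOMS if all(w[a] == selected[0][a] for w in selected)}

def revise(alpha):
    """Select the minimal alpha-worlds of the limited set, if any exist."""
    sat = [(w, r) for w, r in LIMITED if alpha(w)]
    if not sat:                                   # alpha outside the scope
        return beliefs([w for w, r in LIMITED if r == 0])
    best = min(r for _, r in sat)
    return beliefs([w for w, r in sat if r == best])

print(revise(lambda w: not w["q"]))   # succeeds: revised beliefs include p and not q
print(revise(lambda w: not w["p"]))   # no such limited world: prior beliefs are kept
```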


Belief Propagation as Diffusion

arXiv.org Artificial Intelligence

Message-passing algorithms such as belief propagation (BP) are parallel computing schemes that try to estimate the marginals of a high dimensional probability distribution. They are used in various areas involving the statistics of a large number of interacting random variables, such as computational thermodynamics [5, 10], artificial intelligence [11, 21, 15], computer vision [18] and communications processing [3, 4]. We have shown the existence of a non-linear correspondence between BP algorithms and discrete integrators of a new form of continuous-time diffusion equations on belief networks [13, 14].
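
For reference, here is a minimal sketch of the sum-product belief propagation updates themselves on a small tree-structured pairwise model; the graph, unary and pairwise potentials are toy choices, and the diffusion correspondence discussed above is not implemented here.

```python
# Sum-product belief propagation on a 3-node tree with binary variables.
import numpy as np

edges = [(0, 1), (1, 2)]
unary = [np.array([1.0, 2.0]), np.array([1.0, 1.0]), np.array([3.0, 1.0])]
pair = {e: np.array([[2.0, 1.0], [1.0, 2.0]]) for e in edges}

neighbors = {i: [] for i in range(3)}
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

# Messages msg[(i, j)]: message from node i to node j, initialised uniformly.
msg = {(i, j): np.ones(2) for i, j in list(pair) + [(j, i) for i, j in pair]}

def potential(i, j):
    """Pairwise potential indexed as psi[x_i, x_j] regardless of edge orientation."""
    return pair[(i, j)] if (i, j) in pair else pair[(j, i)].T

for _ in range(10):                          # plenty of sweeps for a tree this small
    new = {}
    for (i, j) in msg:
        prod = unary[i].copy()
        for k in neighbors[i]:
            if k != j:
                prod *= msg[(k, i)]
        m = potential(i, j).T @ prod         # sum over x_i
        new[(i, j)] = m / m.sum()
    msg = new

for i in range(3):
    belief = unary[i].copy()
    for k in neighbors[i]:
        belief *= msg[(k, i)]
    print(f"marginal of x{i}:", belief / belief.sum())
```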


Structured World Belief for Reinforcement Learning in POMDP

arXiv.org Artificial Intelligence

Object-centric world models provide a structured representation of the scene and can be an important backbone in reinforcement learning and planning. However, existing approaches suffer in partially observable environments due to the lack of belief states. In this paper, we propose Structured World Belief, a model for learning and inference of object-centric belief states. Inferred by Sequential Monte Carlo (SMC), our belief states provide multiple object-centric scene hypotheses. To synergize the benefits of SMC particles with object representations, we also propose a new object-centric dynamics model that incorporates the inductive bias of object permanence. This enables tracking of object states even when they are invisible for a long time. To further facilitate object tracking in this regime, we allow our model to attend flexibly to any spatial location in the image, a capability that was restricted in previous models. In experiments, we show that object-centric belief provides more accurate and robust performance for filtering and generation. Furthermore, we show the efficacy of structured world belief in improving the performance of reinforcement learning, planning and supervised reasoning.
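
To make the inference machinery concrete, the following is a minimal sketch of the Sequential Monte Carlo (bootstrap particle filter) scheme underlying such belief states, applied to a toy 1-D linear-Gaussian system with a stretch of missing observations standing in for occlusion. The dynamics, noise levels, and occlusion window are illustrative assumptions; the paper's object-centric model is far richer.

```python
# Bootstrap particle filter on a toy 1-D system with an occluded interval.
import numpy as np

rng = np.random.default_rng(0)
T, N = 30, 500                       # time steps, particles

# Ground-truth trajectory and observations; None marks occluded frames.
x_true, obs = 0.0, []
for t in range(T):
    x_true = 0.95 * x_true + 1.0 + rng.normal(0, 0.1)
    obs.append(None if 10 <= t < 15 else x_true + rng.normal(0, 0.5))

particles = rng.normal(0.0, 1.0, size=N)
weights = np.full(N, 1.0 / N)

for t, y in enumerate(obs):
    # Propagate every hypothesis through the dynamics model.
    particles = 0.95 * particles + 1.0 + rng.normal(0, 0.1, size=N)
    if y is not None:                # reweight only when the object is visible
        loglik = -0.5 * ((y - particles) / 0.5) ** 2
        weights = weights * np.exp(loglik - loglik.max())
        weights /= weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < N / 2:
            idx = rng.choice(N, size=N, p=weights)
            particles, weights = particles[idx], np.full(N, 1.0 / N)
    print(f"t={t:2d} estimate={np.dot(weights, particles):7.2f} "
          f"{'(occluded)' if y is None else ''}")
```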


Knowledge from Probability

arXiv.org Artificial Intelligence

We give a probabilistic analysis of inductive knowledge and belief and explore its predictions concerning knowledge about the future, about laws of nature, and about the values of inexactly measured quantities. The analysis combines a theory of knowledge and belief formulated in terms of relations of comparative normality with a probabilistic reduction of those relations. It predicts that only highly probable propositions are believed, and that many widely held principles of belief-revision fail. How can we have knowledge that goes beyond what we have observed - knowledge about the future, or about lawful regularities, or about the distal causes of the readings of our scientific instruments? Many philosophers think we can't. Nelson Goodman, for example, disparagingly writes that "obviously the genuine problem [of induction] cannot be one of attaining unattainable knowledge or of accounting for knowledge that we do not in fact have" [20, p. 62]. Such philosophers typically hold that the best we can do when it comes to inductive hypotheses is to assign them high probabilities. Here we argue that such pessimism is misplaced.
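
One standard numeric illustration of the claim that only highly probable propositions are believed is a probability threshold for belief; the 100-ticket lottery below is a hypothetical example, not the paper's construction, showing how propositions can each clear the threshold while their conjunction does not.

```python
# Threshold rule for belief illustrated with a hypothetical 100-ticket lottery.
threshold = 0.95
n_tickets = 100

# For each ticket i, "ticket i loses" has probability 0.99 -- above the threshold.
p_single_loses = 1 - 1 / n_tickets
print(f"P(ticket i loses) = {p_single_loses:.2f}  believed: {p_single_loses > threshold}")

# The conjunction "every ticket loses" has probability 0 (some ticket wins),
# so it falls below the threshold even though each conjunct is believed.
p_all_lose = 0.0
print(f"P(all tickets lose) = {p_all_lose:.2f}  believed: {p_all_lose > threshold}")
```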


"Unconditional Belief in Heat," by Anna Journey

The New Yorker

I would've stabbed the man's hand had he not jerked it away--this is what I usually say toward the end of the story I've told for almost twenty years: I'm a junior in college towelling my wet hair as I walk from my bathroom through the hall, headed to my bedroom, at two in the morning. I see you, motherfucker, and the hand jerks back. When I call 911 and reach, incredibly, a busy signal, I phone Ed instead, who will drive over, remove his old A.C. unit, take it to his new place. I would've stabbed the hand that tried to steal my A.C.


Streaming Belief Propagation for Community Detection

arXiv.org Machine Learning

The community detection problem requires clustering the nodes of a network into a small number of well-connected "communities". There has been substantial recent progress in characterizing the fundamental statistical limits of community detection under simple stochastic block models. However, in real-world applications, the network structure is typically dynamic, with nodes that join over time. In this setting, we would like a detection algorithm to perform only a limited number of updates at each node arrival. While standard voting approaches satisfy this constraint, it is unclear whether they exploit the network information optimally. We introduce a simple model for networks growing over time, which we refer to as the streaming stochastic block model (StSBM). Within this model, we prove that voting algorithms have fundamental limitations. We also develop a streaming belief-propagation (StreamBP) approach, for which we prove optimality in certain regimes. We validate our theoretical findings on synthetic and real data.
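
In the spirit of the limited-updates constraint, here is a minimal sketch in which each arriving node performs a single local update of its community belief from its neighbours' current beliefs under known SBM edge probabilities. The parameters, the alternating ground-truth labels, the handful of revealed seed labels used to break symmetry, and the neglect of non-edges are all illustrative assumptions; this is not the StreamBP algorithm.

```python
# One local belief update per node arrival on a toy two-community streaming SBM.
import numpy as np

rng = np.random.default_rng(1)
p_in, p_out, n_nodes = 0.15, 0.02, 200      # SBM edge probabilities, assumed known
n_seeds = 10                                # arrivals with revealed labels

labels, beliefs = [], []                    # true community, stored P(z = 0)
for t in range(n_nodes):
    z = t % 2                               # alternate true communities
    nbrs = [u for u in range(t)
            if rng.random() < (p_in if labels[u] == z else p_out)]
    # Combine edge likelihoods from the neighbours' beliefs (non-edges ignored).
    log_b = np.zeros(2)
    for u in nbrs:
        b_u = beliefs[u]
        like0 = b_u * p_in + (1 - b_u) * p_out   # P(edge | z_t = 0)
        like1 = b_u * p_out + (1 - b_u) * p_in   # P(edge | z_t = 1)
        log_b += np.log([like0, like1])
    b = np.exp(log_b - log_b.max())
    b /= b.sum()
    if t < n_seeds:                         # reveal a few labels to break symmetry
        b = np.array([1.0, 0.0]) if z == 0 else np.array([0.0, 1.0])
    labels.append(z)
    beliefs.append(b[0])

pred = [0 if b > 0.5 else 1 for b in beliefs]
acc = float(np.mean([p == z for p, z in zip(pred, labels)]))
print(f"accuracy (up to label flip): {max(acc, 1 - acc):.2f}")
```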


Matrix completion based on Gaussian belief propagation

arXiv.org Machine Learning

We develop a message-passing algorithm for noisy matrix completion problems based on matrix factorization. The algorithm is derived by approximating message distributions of belief propagation with Gaussian distributions that share the same first and second moments. We also derive a memory-friendly version of the proposed algorithm by applying a perturbation treatment commonly used in the literature of approximate message passing. In addition, a damping technique, which is demonstrated to be crucial for optimal performance, is introduced without computational strain, and the relationship to the message-passing version of alternating least squares, a method reported to be optimal in certain settings, is discussed. Experiments on synthetic datasets show that while the proposed algorithm quantitatively exhibits almost the same performance under settings where the earlier algorithm is optimal, it is advantageous when the observed datasets are corrupted by non-Gaussian noise. Experiments on real-world datasets also emphasize the performance differences between the two algorithms.
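
For context, the following is a minimal sketch of plain alternating least squares (ALS) for noisy matrix completion, the closely related baseline mentioned above, rather than the Gaussian belief-propagation algorithm itself; dimensions, rank, mask density, and regularisation are arbitrary toy choices.

```python
# Alternating least squares for noisy matrix completion on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
m, n, r, lam = 60, 40, 3, 0.1

# Ground-truth low-rank matrix, observed on a random mask with Gaussian noise.
A_true = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
mask = rng.random((m, n)) < 0.4
Y = np.where(mask, A_true + 0.1 * rng.normal(size=(m, n)), 0.0)

U = rng.normal(scale=0.1, size=(m, r))
V = rng.normal(scale=0.1, size=(n, r))

def solve_rows(Y, mask, W, lam):
    """Regularised least-squares update of one factor, row by row, over observed entries."""
    out = np.zeros((Y.shape[0], W.shape[1]))
    for i in range(Y.shape[0]):
        obs = mask[i]
        Wi = W[obs]                            # factors of the observed columns
        out[i] = np.linalg.solve(Wi.T @ Wi + lam * np.eye(W.shape[1]),
                                 Wi.T @ Y[i, obs])
    return out

for it in range(30):
    U = solve_rows(Y, mask, V, lam)            # fix V, solve for U
    V = solve_rows(Y.T, mask.T, U, lam)        # fix U, solve for V
    rmse = np.sqrt(np.mean((U @ V.T - A_true) ** 2))
    if it % 10 == 0:
        print(f"iter {it:2d}  reconstruction RMSE {rmse:.3f}")
```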