Belief Revision
iX-BSP: Incremental Belief Space Planning
Farhi, Elad I., Indelman, Vadim
Deciding "what's next?" is a fundamental problem in robotics and Artificial Intelligence. Under belief space planning (BSP), in a partially observable setting, it involves calculating the expected accumulated belief-dependent reward, where the expectation is with respect to all future measurements. Since solving this general un-approximated problem quickly becomes intractable, state-of-the-art approaches turn to approximations while still calculating each planning session from scratch. In this work we propose a novel paradigm, Incremental BSP (iX-BSP), based on the key insight that calculations across planning sessions are similar in nature and can be appropriately re-used. We calculate the expectation incrementally by utilizing Multiple Importance Sampling techniques for selective re-sampling and re-use of measurements from previous planning sessions. The formulation of our approach considers general distributions and accounts for data association aspects. We demonstrate how iX-BSP could benefit existing approximations of the general problem, introducing iML-BSP, which re-uses calculations across planning sessions under the common Maximum Likelihood assumption. We evaluate both methods and demonstrate a substantial reduction in computation time while statistically preserving accuracy. The evaluation includes both simulation and real-world experiments considering autonomous vision-based navigation and SLAM. As a further contribution, we introduce to iX-BSP the non-integral wildfire approximation, which allows trading accuracy for computational performance by refraining from updating re-used beliefs when they are "close enough". We evaluate iX-BSP under wildfire, demonstrating a substantial reduction in computation time while controlling the sacrifice in accuracy. We also provide analytical and empirical bounds on the effect wildfire has on the objective value.
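As a rough illustration of the sample re-use idea (a minimal sketch, not the authors' implementation), the snippet below estimates an expectation under the current planning session's measurement distribution by combining a few freshly drawn samples with samples re-used from the previous session, weighting every sample with the balance heuristic of Multiple Importance Sampling. The distributions, the reward, and the sample counts are all assumptions made for illustration.

```python
# Illustrative sketch: estimating E_new[reward(z)] by mixing re-used samples
# (drawn under the previous session's distribution) with fresh samples
# (drawn under the current one) via the MIS balance heuristic.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def mis_expectation(reward, samples_old, pdf_old, samples_new, pdf_new):
    """Balance-heuristic MIS estimate of the expectation of `reward` under
    pdf_new, using samples from both pdf_old (re-used) and pdf_new (fresh)."""
    n_old, n_new = len(samples_old), len(samples_new)
    total = 0.0
    for z in np.concatenate([samples_old, samples_new]):
        # Each sample contributes reward(z) * pdf_new(z) / (mixture density),
        # which is the balance-heuristic weighting written compactly.
        mixture = n_old * pdf_old(z) + n_new * pdf_new(z)
        total += reward(z) * pdf_new(z) / mixture
    return total

# Toy example: last session predicted measurements ~ N(0, 1), the current one
# predicts ~ N(0.5, 1); the belief-dependent reward is simply z**2.
pdf_old = norm(0.0, 1.0).pdf
pdf_new = norm(0.5, 1.0).pdf
samples_old = rng.normal(0.0, 1.0, size=200)   # re-used from the last session
samples_new = rng.normal(0.5, 1.0, size=50)    # only a few fresh samples
print(mis_expectation(lambda z: z**2, samples_old, pdf_old, samples_new, pdf_new))
```

The design point of the sketch is that the 200 re-used samples are never redrawn or re-evaluated from scratch; only their importance weights change once the current session's distribution is known.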
A Qualitative Theory of Cognitive Attitudes and their Change
Since the seminal work of Hintikka on epistemic logic [28], of Von Wright on the logic of preference [55, 56] and of Cohen & Levesque on the logic of intention [19], many formal logics for reasoning about cognitive attitudes of agents such as knowledge and belief [24], preference [32, 48], desire [23], intention [44, 30] and their combination [38, 54] have been proposed. Generally speaking, these logics are nothing but formal models of rational agency relying on the idea that an agent endowed with cognitive attitudes makes decisions on the basis of what she believes and of what she desires or prefers. The idea of describing rational agents in terms of their epistemic and motivational attitudes is something that these logics share with classical decision theory and game theory. Classical decision theory and game theory provide a quantitative account of individual and strategic decision-making by assuming that agents' beliefs and desires can be respectively modeled by subjective probabilities and utilities. Qualitative approaches to individual and strategic decision-making have been proposed in AI [16, 22] to characterize criteria that a rational agent should adopt for making decisions when she cannot build a probability distribution over the set of possible events and her preference over the set of possible outcomes cannot be expressed by a utility function but only by a qualitative ordering over the outcomes.
Data Obsolescence Detection in the Light of Newly Acquired Valid Observations
Chaieb, Salma, Mrad, Ali Ben, Hnich, Brahim, Delcroix, Véronique
The information describing the conditions of a system or a person is constantly evolving and may become obsolete and contradict other information. A database, therefore, must be consistently updated upon the acquisition of new valid observations that contradict obsolete ones it contains. In this paper, we propose a novel approach for dealing with the information obsolescence problem. Our approach aims to detect, in real-time, contradictions between observations and then identify the obsolete ones, given a representation model. Since we work within an uncertain environment characterized by the lack of information, we choose to use a Bayesian network as our representation model and propose a new approximate concept, $\epsilon$-Contradiction. The new concept is parameterised by a confidence level of having a contradiction in a set of observations. We propose a polynomial-time algorithm for detecting obsolete information. We show that the resulting obsolete information is better represented by an AND-OR tree than a simple set of observations. Finally, we demonstrate the effectiveness of our approach on a real elderly fall-prevention database and showcase how this tree can be used to give reliable recommendations to doctors. Our experiments consistently yield very good results.
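One plausible reading of the $\epsilon$-Contradiction idea (a hand-rolled sketch; the tiny network, the variable names, and the threshold are assumptions, not the paper's fall-prevention model or exact algorithm) is to flag a set of observations as contradictory when the probability the Bayesian network assigns to observing them jointly falls below the confidence level $\epsilon$.

```python
# Hypothetical sketch of an epsilon-contradiction test over a tiny, hand-coded
# Bayesian network: Mobility -> Falls, Mobility -> UsesWalker (binary variables).
p_mobility = {0: 0.3, 1: 0.7}                              # P(Mobility: 0=impaired, 1=ok)
p_falls    = {0: {1: 0.6, 0: 0.4}, 1: {1: 0.1, 0: 0.9}}    # P(Falls | Mobility)
p_walker   = {0: {1: 0.8, 0: 0.2}, 1: {1: 0.05, 0: 0.95}}  # P(UsesWalker | Mobility)

def prob_of_observations(obs):
    """Exact P(obs): sum the joint over the unobserved Mobility variable."""
    total = 0.0
    for mobility in (0, 1):
        if "Mobility" in obs and obs["Mobility"] != mobility:
            continue
        p = p_mobility[mobility]
        if "Falls" in obs:
            p *= p_falls[mobility][obs["Falls"]]
        if "UsesWalker" in obs:
            p *= p_walker[mobility][obs["UsesWalker"]]
        total += p
    return total

def epsilon_contradiction(obs, epsilon):
    """Flag the observation set as contradictory when its joint probability
    under the model falls below the confidence level epsilon."""
    return prob_of_observations(obs) < epsilon

# 'No falls' together with 'uses a walker' has joint probability ~0.13 under
# this model, so it is flagged as contradictory at epsilon = 0.15.
obs = {"Falls": 0, "UsesWalker": 1}
print(prob_of_observations(obs), epsilon_contradiction(obs, epsilon=0.15))
```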
Merging with unknown reliability
Such a scenario occurs, but not especially often: two identical temperature sensors produce readings that are equally likely to be close to the actual value, but a difference in make, age, or position changes their reliability. Two experts hardly ever have the very same knowledge, experience and ability. The reliability of two databases on a certain area may depend on factors that are unknown when merging them. Merging under equal and unequal reliability are two scenarios, but a third exists: unknown reliability. Most previous work in belief merging is about the first [41, 43, 13, 22, 36, 31, 23]; some is about the second [53, 42, 12, 35]; this one is about the third. The difference between equal and unknown reliability becomes clear when its implications are shown on some examples.
Dynamic Preference Logic meets Iterated Belief Change: Representation Results and Postulates Characterization
Souza, Marlo, Moreira, Álvaro, Vieira, Renata
AGM's belief revision is one of the main paradigms in the study of belief change operations. Recently, several logics for belief and information change have been proposed in the literature and used to encode belief change operations in rich and expressive semantic frameworks. While the connections between AGM-like operations and their encoding in dynamic doxastic logics have been studied before in the work of Segerberg, most works in the area of Dynamic Epistemic Logics (DEL) have not, to our knowledge, attempted to use those logics as tools to investigate mathematical properties of belief change operators. This work investigates how Dynamic Preference Logic, a logic in the DEL family, can be used to study properties of dynamic belief change operators, focusing on well-known postulates of iterated belief change.
State Estimation of Power Flows for Smart Grids via Belief Propagation
Ritmeester, Tim, Meyer-Ortmanns, Hildegard
Belief propagation is an algorithm known from statistical physics and computer science. It provides an efficient way of approximating marginals: the large sums of products they involve are rearranged into nested products of sums. This allows a reliable estimation of the state of power grids, and of its variance, as needed for the control and forecasting of power grid management. On prototypical examples of IEEE grids we show that belief propagation not only scales linearly with the grid size for the state estimation itself, but also facilitates and accelerates the retrieval of missing data and allows an optimized positioning of measurement units. Based on belief propagation, we give a criterion for assessing whether other algorithms, using only local information, are adequate for state estimation on a given grid. We also demonstrate how belief propagation can be utilized for coarse-graining power grids towards representations that reduce the computational effort when the coarse-grained version is integrated into a larger grid. It provides a criterion for partitioning power grids into areas so as to minimize the error of flow estimates between different areas.
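To make the "sums of products rearranged into nested products of sums" point concrete (a toy three-variable chain with made-up potentials, unrelated to the specifics of power-grid state estimation), the sketch below computes a single-node marginal both by brute-force summation over all joint configurations and by belief-propagation messages; on a tree the two agree exactly.

```python
# Toy sum-product illustration on a chain x1 - x2 - x3 with binary states.
import numpy as np

# Pairwise potentials of the chain (arbitrary positive values).
psi12 = np.array([[1.0, 0.5], [0.2, 2.0]])   # psi12[x1, x2]
psi23 = np.array([[0.3, 1.5], [1.0, 0.7]])   # psi23[x2, x3]

# Brute force: the marginal of x2 is a sum of products over all configurations.
brute = np.zeros(2)
for x1 in range(2):
    for x2 in range(2):
        for x3 in range(2):
            brute[x2] += psi12[x1, x2] * psi23[x2, x3]
brute /= brute.sum()

# Belief propagation: messages from the leaves, then a product of sums.
msg_1_to_2 = psi12.sum(axis=0)   # m_{1->2}(x2) = sum_{x1} psi12(x1, x2)
msg_3_to_2 = psi23.sum(axis=1)   # m_{3->2}(x2) = sum_{x3} psi23(x2, x3)
bp = msg_1_to_2 * msg_3_to_2
bp /= bp.sum()

print(brute, bp)   # identical marginals; the message-passing cost grows linearly with chain length
```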
On the Relationship Between KR Approaches for Explainable Planning
Vasileiou, Stylianos Loukas, Yeoh, William, Son, Tran Cao
In this paper, we build upon notions from knowledge representation and reasoning (KR) to expand a preliminary logic-based framework that characterizes the model reconciliation problem for explainable planning. We also provide a detailed exposition on the relationship between similar KR techniques, such as abductive explanations and belief change, and their applicability to explainable planning.
Bayes Meets Entailment and Prediction: Commonsense Reasoning with Non-monotonicity, Paraconsistency and Predictive Accuracy
Kido, Hiroyuki, Okamoto, Keishi
The recent success of Bayesian methods in neuroscience and artificial intelligence gives rise to the hypothesis that the brain is a Bayesian machine. Since logic and learning are both practices of the human brain, this leads to the further hypothesis that a Bayesian interpretation underlies both logical reasoning and machine learning. In this paper, we introduce a generative model of logical consequence relations. It formalises the process by which the truth value of a sentence is probabilistically generated from the probability distribution over states of the world. We show that the generative model characterises classical, paraconsistent and non-monotonic consequence relations. In particular, the generative model gives a new consequence relation that outperforms them in reasoning with inconsistent knowledge. We also show that the generative model gives a new classification algorithm that outperforms several representative algorithms in predictive accuracy and complexity on the Kaggle Titanic dataset.
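A toy rendering of the underlying idea (an assumed reading, not the paper's exact formalisation): treat entailment as a conditional probability over a distribution on states of the world, and accept a consequence when that probability clears a threshold. The atoms, the prior over worlds, and the threshold below are invented for illustration.

```python
# Toy probabilistic consequence relation over a distribution on possible worlds.
from itertools import product

ATOMS = ("rain", "sprinkler", "wet")

def worlds():
    """All truth-value assignments over the atoms (the states of the world)."""
    for values in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, values))

def prior(world):
    """A hand-picked, unnormalised prior: wet grass is very likely when it
    rains or the sprinkler is on, and unlikely otherwise."""
    p = 0.3 if world["rain"] else 0.7
    p *= 0.4 if world["sprinkler"] else 0.6
    if world["rain"] or world["sprinkler"]:
        p *= 0.95 if world["wet"] else 0.05
    else:
        p *= 0.05 if world["wet"] else 0.95
    return p

def entails(premise, conclusion, threshold=0.9):
    """Probabilistic consequence: P(conclusion | premise) >= threshold."""
    mass_premise = sum(prior(w) for w in worlds() if premise(w))
    mass_both = sum(prior(w) for w in worlds() if premise(w) and conclusion(w))
    return mass_premise > 0 and mass_both / mass_premise >= threshold

print(entails(lambda w: w["rain"], lambda w: w["wet"]))  # True: P(wet | rain) = 0.95
print(entails(lambda w: w["wet"], lambda w: w["rain"]))  # False: wet grass alone does not make rain probable enough
```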
Quantum d-separation and quantum belief propagation
The goal of this paper is to generalize classical d-separation and classical Belief Propagation (BP) to the quantum realm. Classical d-separation is an essential ingredient of most of Judea Pearl's work; it is crucial to all three rungs of what Pearl calls the Ladder of Causation. So having a quantum version of d-separation and BP probably implies that most of Pearl's work on Bayesian networks, including his theory of causality, can be translated in a straightforward manner to the quantum realm.