Belief Revision


Parsimonious Inference

arXiv.org Machine Learning

Bayesian inference provides a uniquely rigorous approach to obtain principled justification for uncertainty in predictions, yet it is difficult to articulate suitably general prior belief in the machine learning context, where computational architectures are pure abstractions subject to frequent modifications by practitioners attempting to improve results. Parsimonious inference is an information-theoretic formulation of inference over arbitrary architectures that formalizes Occam's Razor; we prefer simple and sufficient explanations. Our universal hyperprior assigns plausibility to prior descriptions, encoded as sequences of symbols, by expanding on the core relationships between program length, Kolmogorov complexity, and Solomonoff's algorithmic probability. We then cast learning as information minimization over our composite change in belief when an architecture is specified, training data are observed, and model parameters are inferred. By distinguishing model complexity from prediction information, our framework also quantifies the phenomenon of memorization. Although our theory is general, it is most critical when datasets are limited, e.g. small or skewed. We develop novel algorithms for polynomial regression and random forests that are suitable for such data, as demonstrated by our experiments. Our approaches combine efficient encodings with prudent sampling strategies to construct predictive ensembles without cross-validation, thus addressing a fundamental challenge in how to efficiently obtain predictions from data.
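To illustrate the Occam's razor intuition the abstract appeals to (preferring simple and sufficient explanations by trading model complexity against fit), here is a minimal, generic two-part-code (MDL-style) sketch for polynomial regression. It is an illustrative stand-in, not the paper's encoding or sampling scheme; the fixed bits-per-parameter cost and Gaussian residual code are assumptions made for the example.

# Generic MDL-flavored sketch: pick the polynomial degree minimizing a
# two-part code length (bits for the model plus bits for the residuals).
# Not the paper's algorithm; costs below are illustrative assumptions.
import numpy as np

def description_length(x, y, degree, bits_per_param=32):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma2 = max(np.mean(residuals ** 2), 1e-12)
    # Model cost: a crude fixed cost per coefficient.
    model_bits = bits_per_param * (degree + 1)
    # Data cost: Gaussian code length of the residuals, in bits.
    n = len(y)
    data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * sigma2)
    return model_bits + data_bits

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = 1.0 - 2.0 * x + 0.5 * x ** 2 + rng.normal(0, 0.1, x.size)
best = min(range(8), key=lambda d: description_length(x, y, d))
print("degree chosen by two-part code length:", best)

On data like this, increasing the degree keeps lowering the residual cost only until the extra parameter bits outweigh the gain, which is the complexity-versus-fit trade-off the abstract formalizes more generally.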


Inferring Agents Preferences as Priors for Probabilistic Goal Recognition

arXiv.org Artificial Intelligence

Recent approaches to goal recognition have leveraged planning landmarks to achieve high accuracy with low runtime cost. These approaches, however, lack a probabilistic interpretation. Furthermore, while most probabilistic models of goal recognition assume that the recognizer has access to a prior probability representing, for example, an agent's preferences, virtually no goal recognition approach actually uses the prior in practice, simply assuming a uniform prior. In this paper, we provide a model that both extends landmark-based goal recognition with a probabilistic interpretation and allows such a prior probability to be estimated and used to compute posterior probabilities after repeated interactions with observed agents. We empirically show that our model can not only recognize goals effectively but also successfully infer the correct prior probability distribution representing an agent's preferences.
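A hedged sketch of the kind of Bayesian combination the abstract describes: a prior over goals estimated from how often the observed agent pursued each goal in past interactions, combined with a landmark-based likelihood to give a posterior. The "fraction of achieved landmarks" likelihood and all names here are illustrative assumptions, not the paper's definitions.

# Toy posterior over goals: Laplace-smoothed prior from past interactions
# times an assumed landmark-completion likelihood. Illustrative only.
from collections import Counter

def goal_posterior(goals, achieved_landmarks, required_landmarks, past_choices):
    counts = Counter(past_choices)
    prior = {g: (counts[g] + 1) / (len(past_choices) + len(goals)) for g in goals}
    likelihood = {
        g: len(achieved_landmarks & required_landmarks[g]) / max(len(required_landmarks[g]), 1)
        for g in goals
    }
    unnorm = {g: prior[g] * likelihood[g] for g in goals}
    z = sum(unnorm.values()) or 1.0
    return {g: p / z for g, p in unnorm.items()}

goals = ["A", "B"]
required = {"A": {"l1", "l2"}, "B": {"l1", "l3", "l4"}}
print(goal_posterior(goals, {"l1", "l2"}, required, past_choices=["A", "A", "B"]))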


iX-BSP: Incremental Belief Space Planning

arXiv.org Artificial Intelligence

Deciding "what's next?" is a fundamental problem in robotics and Artificial Intelligence. Under belief space planning (BSP), in a partially observable setting, it involves calculating the expected accumulated belief-dependent reward, where the expectation is taken with respect to all future measurements. Since solving this general un-approximated problem quickly becomes intractable, state-of-the-art approaches turn to approximations while still calculating each planning session from scratch. In this work we propose a novel paradigm, Incremental BSP (iX-BSP), based on the key insight that calculations across planning sessions are similar in nature and can be appropriately re-used. We calculate the expectation incrementally by utilizing Multiple Importance Sampling techniques for selective re-sampling and re-use of measurements from previous planning sessions. The formulation of our approach considers general distributions and accounts for data association aspects. We demonstrate how iX-BSP could benefit existing approximations of the general problem, introducing iML-BSP, which re-uses calculations across planning sessions under the common Maximum Likelihood assumption. We evaluate both methods and demonstrate a substantial reduction in computation time while statistically preserving accuracy. The evaluation includes both simulation and real-world experiments considering autonomous vision-based navigation and SLAM. As a further contribution, we introduce to iX-BSP the non-integral wildfire approximation, which allows one to trade accuracy for computational performance by refraining from updating re-used beliefs when they are "close enough". We evaluate iX-BSP under wildfire, demonstrating a substantial reduction in computation time while controlling the accuracy sacrifice. We also provide analytical and empirical bounds on the effect wildfire has on the objective value.
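The core re-use mechanism the abstract mentions is Multiple Importance Sampling. Below is a minimal sketch of the standard balance-heuristic estimator, with samples drawn under a hypothetical previous session's proposal re-used alongside fresh samples from the current one; the Gaussian proposals, target, and integrand are toy stand-ins, not the paper's belief-dependent reward.

# Balance-heuristic MIS: combine re-used and fresh samples to estimate E_p[f].
# Toy distributions; illustrative of the re-use idea only.
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
n = 500
mu_prev, mu_curr, mu_target = 0.0, 0.3, 0.3
x_prev = rng.normal(mu_prev, 1.0, n)   # re-used samples from the previous session
x_curr = rng.normal(mu_curr, 1.0, n)   # fresh samples from the current session
f = lambda z: z ** 2                   # toy integrand standing in for the reward

def mis_term(x):
    # Balance heuristic collapses to p(x) f(x) / sum_j n_j q_j(x).
    denom = n * normal_pdf(x, mu_prev, 1.0) + n * normal_pdf(x, mu_curr, 1.0)
    return normal_pdf(x, mu_target, 1.0) * f(x) / denom

estimate = mis_term(x_prev).sum() + mis_term(x_curr).sum()
print("MIS estimate of E_p[z^2]:", estimate)   # true value here is 1.09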


A maximum entropy model of bounded rational decision-making with prior beliefs and market feedback

arXiv.org Artificial Intelligence

Bounded rationality is an important consideration stemming from the fact that agents often have limits on their processing abilities, making the assumption of perfect rationality inapplicable to many real tasks. We propose an information-theoretic approach to the inference of agent decisions under Smithian competition. The model explicitly captures the boundedness of agents (limited in their information-processing capacity) as the cost of information acquisition for expanding their prior beliefs. The expansion is measured as the Kullback-Leibler divergence between posterior decisions and prior beliefs. When information acquisition is free, the homo economicus agent is recovered, whereas when information acquisition becomes costly, agents instead revert to their prior beliefs. The maximum entropy principle is used to infer least-biased decisions, based upon the notion of Smithian competition formalised within the Quantal Response Statistical Equilibrium framework. The incorporation of prior beliefs into such a framework allowed us to systematically explore the effects of prior beliefs on decision-making in the presence of market feedback. We verified the proposed model using Australian housing market data, showing how the incorporation of prior knowledge alters the resulting agent decisions. Specifically, it allowed for the separation (and analysis) of the past beliefs and utility-maximisation behaviour of the agent.
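The trade-off described here has a standard closed form worth making explicit: maximizing expected utility minus a KL cost of departing from the prior yields a decision distribution p(a) proportional to prior(a) * exp(beta * u(a)). The sketch below uses toy priors and utilities, not the paper's housing-market specification; large beta (cheap information) approaches pure utility maximization, small beta stays close to the prior.

# KL-regularized (bounded-rational) choice rule; toy numbers for illustration.
import numpy as np

def bounded_rational_choice(prior, utility, beta):
    logits = np.log(prior) + beta * np.asarray(utility)
    p = np.exp(logits - logits.max())
    return p / p.sum()

prior = np.array([0.7, 0.2, 0.1])      # agent's prior over three actions
utility = np.array([0.0, 1.0, 3.0])    # action utilities
for beta in (0.0, 1.0, 10.0):
    print(beta, bounded_rational_choice(prior, utility, beta).round(3))

With beta = 0 the output equals the prior (information too costly to acquire), while beta = 10 concentrates almost all mass on the highest-utility action, recovering the homo economicus limit mentioned in the abstract.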


A Qualitative Theory of Cognitive Attitudes and their Change

arXiv.org Artificial Intelligence

Since the seminal work of Hintikka on epistemic logic [28], of Von Wright on the logic of preference [55, 56] and of Cohen & Levesque on the logic of intention [19], many formal logics for reasoning about cognitive attitudes of agents such as knowledge and belief [24], preference [32, 48], desire [23], intention [44, 30] and their combination [38, 54] have been proposed. Generally speaking, these logics are nothing but formal models of rational agency relying on the idea that an agent endowed with cognitive attitudes makes decisions on the basis of what she believes and of what she desires or prefers. The idea of describing rational agents in terms of their epistemic and motivational attitudes is something that these logics share with classical decision theory and game theory. Classical decision theory and game theory provide a quantitative account of individual and strategic decision-making by assuming that agents' beliefs and desires can be respectively modeled by subjective probabilities and utilities. Qualitative approaches to individual and strategic decision-making have been proposed in AI [16, 22] to characterize criteria that a rational agent should adopt for making decisions when she cannot build a probability distribution over the set of possible events and her preference over the set of possible outcomes cannot be expressed by a utility function but only by a qualitative ordering over the outcomes.


Data Obsolescence Detection in the Light of Newly Acquired Valid Observations

arXiv.org Artificial Intelligence

The information describing the conditions of a system or a person is constantly evolving and may become obsolete and contradict other information. A database, therefore, must be consistently updated upon the acquisition of new valid observations that contradict obsolete ones contained in the database. In this paper, we propose a novel approach for dealing with the information obsolescence problem. Our approach aims to detect, in real time, contradictions between observations and then identify the obsolete ones, given a representation model. Since we work within an uncertain environment characterized by the lack of information, we choose to use a Bayesian network as our representation model and propose a new approximate concept, ε-Contradiction. The new concept is parameterised by a confidence level of having a contradiction in a set of observations. We propose a polynomial-time algorithm for detecting obsolete information. We show that the resulting obsolete information is better represented by an AND-OR tree than by a simple set of observations. Finally, we demonstrate the effectiveness of our approach on a real elderly fall-prevention database and showcase how this tree can be used to give reliable recommendations to doctors. Our experiments systematically give very good results.
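One illustrative (and deliberately loose) reading of the confidence-parameterised contradiction idea: under a probabilistic representation model, a set of observations is flagged as contradictory when its joint probability falls below a threshold ε. The tiny hand-coded cause-and-two-sensors model below is a toy stand-in, not the paper's Bayesian network or its exact ε-Contradiction definition.

# Toy check: flag an observation set as contradictory if its probability
# under a small hand-coded model drops below epsilon. Illustrative only.
from itertools import product

def joint(cause, s1, s2):
    # P(cause) and sensor models P(s_i | cause); numbers are made up.
    p_cause = 0.3 if cause else 0.7
    p_s1 = 0.95 if s1 == cause else 0.05
    p_s2 = 0.95 if s2 == cause else 0.05
    return p_cause * p_s1 * p_s2

def prob_of_observations(obs):
    # Marginalize the unobserved variables of the toy model.
    total = 0.0
    for cause, s1, s2 in product([True, False], repeat=3):
        assignment = {"cause": cause, "s1": s1, "s2": s2}
        if all(assignment[k] == v for k, v in obs.items()):
            total += joint(cause, s1, s2)
    return total

def is_contradictory(obs, epsilon=0.05):
    return prob_of_observations(obs) < epsilon

print(is_contradictory({"s1": True, "s2": True}))    # consistent readings: False
print(is_contradictory({"s1": True, "s2": False}))   # conflicting readings: True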


Merging with unknown reliability

arXiv.org Artificial Intelligence

A scenario in which all sources are exactly equally reliable occurs, but not especially often. Two identical temperature sensors produce readings that are equally likely to be close to the actual value, but a difference in make, age, or position changes their reliability. Two experts hardly have the very same knowledge, experience and ability. The reliability of two databases on a certain area may depend on factors that are unknown when merging them. Merging under equal and unequal reliability are two scenarios, but a third exists: unknown reliability. Most previous work in belief merging is about the first [41, 43, 13, 22, 36, 31, 23]; some is about the second [53, 42, 12, 35]; this one is about the third. The difference between equal and unknown reliability becomes clear when its implications are shown on some examples.


Dynamic Preference Logic meets Iterated Belief Change: Representation Results and Postulates Characterization

arXiv.org Artificial Intelligence

AGM's belief revision is one of the main paradigms in the study of belief change operations. Recently, several logics for belief and information change have been proposed in the literature and used to encode belief change operations in rich and expressive semantic frameworks. While the connections between AGM-like operations and their encoding in dynamic doxastic logics have been studied before in the work of Segerberg, most works in the area of Dynamic Epistemic Logic (DEL) have not, to our knowledge, attempted to use those logics as tools to investigate mathematical properties of belief change operators. This work investigates how Dynamic Preference Logic, a logic in the DEL family, can be used to study properties of dynamic belief change operators, focusing on well-known postulates of iterated belief change.
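For a concrete feel of the kind of operator such logics can encode, here is a small sketch of one well-known iterated belief change policy, lexicographic (radical) upgrade: worlds satisfying the new information become strictly more plausible than those that do not, while the relative order inside each group is preserved. The worlds and formulas are toy stand-ins, and this is a standard textbook operator, not the paper's particular construction.

# Lexicographic upgrade on a plausibility order over possible worlds (toy).
def lexicographic_upgrade(order, satisfies):
    # 'order' lists worlds from most to least plausible.
    preferred = [w for w in order if satisfies(w)]
    rest = [w for w in order if not satisfies(w)]
    return preferred + rest

worlds = ["pq", "p", "q", "neither"]               # most to least plausible
after_p = lexicographic_upgrade(worlds, lambda w: "p" in w)
after_pq = lexicographic_upgrade(after_p, lambda w: "q" in w)
print(after_p)     # worlds where p holds move to the front
print(after_pq)    # then worlds where q holds move to the front, order preserved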


State Estimation of Power Flows for Smart Grids via Belief Propagation

arXiv.org Artificial Intelligence

Belief propagation is an algorithm known from statistical physics and computer science. It provides an efficient way of calculating marginals: the large sums of products involved are rearranged into nested products of sums that approximate the marginals. It allows reliable estimation of the state of power grids and of its variance, which is needed for the control and forecasting of power grid management. On prototypical IEEE test grids we show that belief propagation not only scales linearly with the grid size for the state estimation itself, but also facilitates and accelerates the retrieval of missing data and allows an optimized positioning of measurement units. Based on belief propagation, we give a criterion for assessing whether other algorithms, using only local information, are adequate for state estimation of a given grid. We also demonstrate how belief propagation can be utilized for coarse-graining power grids towards representations that reduce the computational effort when the coarse-grained version is integrated into a larger grid. It provides a criterion for partitioning power grids into areas in order to minimize the error of flow estimates between different areas.
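To make the "sums of products rearranged into nested products of sums" point concrete, here is a minimal sum-product sketch on a three-variable chain, with the message-passing marginal compared against brute-force summation of the full joint. The pairwise factor tables are toy values, not a power-grid model; on a tree like this chain the two results coincide exactly.

# Sum-product on a chain x1 - x2 - x3 versus brute-force marginalization. Toy factors.
import numpy as np

psi12 = np.array([[1.0, 0.5], [0.2, 1.0]])        # factor over (x1, x2)
psi23 = np.array([[1.0, 0.1], [0.3, 1.0]])        # factor over (x2, x3)

# Message passing along the chain.
m1_to_2 = psi12.sum(axis=0)                        # sum over x1
m2_to_3 = (m1_to_2[:, None] * psi23).sum(axis=0)   # sum over x2
marginal_bp = m2_to_3 / m2_to_3.sum()

# Brute-force marginal of x3 from the full joint, for comparison.
joint = psi12[:, :, None] * psi23[None, :, :]
marginal_bf = joint.sum(axis=(0, 1))
marginal_bf = marginal_bf / marginal_bf.sum()

print(marginal_bp, marginal_bf)                    # identical on a tree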


On the Relationship Between KR Approaches for Explainable Planning

arXiv.org Artificial Intelligence

In this paper, we build upon notions from knowledge representation and reasoning (KR) to expand a preliminary logic-based framework that characterizes the model reconciliation problem for explainable planning. We also provide a detailed exposition on the relationship between similar KR techniques, such as abductive explanations and belief change, and their applicability to explainable planning.