Belief Revision
A General Katsuno-Mendelzon-Style Characterization of AGM Belief Base Revision for Arbitrary Monotonic Logics
Falakh, Faiq Miftakhul, Rudolph, Sebastian, Sauerwald, Kai
The AGM postulates by Alchourrón, Gärdenfors, and Makinson continue to represent a cornerstone in research related to belief change. We generalize the approach of Katsuno and Mendelzon (KM) for characterizing AGM base revision from propositional logic to the setting of (multiple) base revision in arbitrary monotonic logics.
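As background for the generalization described above, the classical KM representation result for propositional bases can be stated as follows (standard notation, recalled here rather than quoted from the paper): a revision operator $\circ$ satisfies the KM postulates if and only if there is a faithful assignment mapping each base $\psi$ to a total preorder $\le_\psi$ on interpretations such that

    \[ \mathrm{Mod}(\psi \circ \mu) \;=\; \min\bigl(\mathrm{Mod}(\mu), \le_\psi\bigr), \]

where faithfulness requires the models of $\psi$ to be exactly the $\le_\psi$-minimal interpretations. The paper lifts this correspondence from propositional logic to arbitrary monotonic logics and to revision by sets of formulae.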
High-dimensional near-optimal experiment design for drug discovery via Bayesian sparse sampling
Eriksson, Hannes, Dimitrakakis, Christos, Carlsson, Lars
We study the problem of performing automated experiment design for drug screening through Bayesian inference and optimisation. In particular, we compare and contrast the behaviour of linear-Gaussian models and Gaussian processes, when used in conjunction with upper confidence bound algorithms, Thompson sampling, or bounded-horizon tree search. We show that sophisticated non-myopic exploration techniques using sparse tree search have a distinct advantage over methods such as Thompson sampling or upper confidence bounds in this setting. We demonstrate the significant superiority of the approach on existing and synthetic drug-toxicity datasets.
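As a minimal illustration of one of the baselines compared above, the following sketch implements Thompson sampling with a Bayesian linear-Gaussian model on synthetic screening data; the feature dimension, noise level, and prior scale are assumptions for illustration, and the sparse tree search favoured by the paper is not reproduced here.

    # Illustrative sketch (not the authors' code): Thompson sampling with a
    # Bayesian linear-Gaussian model for sequential experiment selection.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))          # candidate compounds as feature vectors
    true_w = rng.normal(size=8)            # unknown response weights (simulated)
    noise, prior_var = 0.5, 1.0            # assumed noise std and prior variance

    A = np.eye(8) / prior_var              # posterior precision matrix
    b = np.zeros(8)                        # precision-weighted mean accumulator
    for step in range(50):
        cov = np.linalg.inv(A)
        mean = cov @ b
        w_sample = rng.multivariate_normal(mean, cov)   # Thompson draw
        i = int(np.argmax(X @ w_sample))   # candidate maximizing sampled prediction
        y = X[i] @ true_w + noise * rng.normal()        # run the (simulated) assay
        A += np.outer(X[i], X[i]) / noise**2            # Bayesian posterior update
        b += y * X[i] / noise**2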
A geometric approach to conditioning belief functions
Conditioning is crucial in applied science whenever inference over time series is involved. Belief calculus is an effective way of handling such inference in the presence of epistemic uncertainty; unfortunately, different approaches to conditioning in the belief function framework have been proposed in the past, leaving the matter somewhat unsettled. Inspired by the geometric approach to uncertainty, in this paper we propose an approach to the conditioning of belief functions based on geometrically projecting them onto the simplex associated with the conditioning event in the space of all belief functions. We show here that such a geometric approach to conditioning often produces simple results with straightforward interpretations in terms of degrees of belief. This raises the question of whether classical approaches, such as Dempster's conditioning, can also be reduced to some form of distance minimisation in a suitable space. The study of families of combination rules generated by (geometric) conditioning rules appears to be a natural continuation of the research presented here.
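In symbols, the scheme the abstract describes can be sketched as follows (notation assumed here, not quoted from the paper):

    \[ Bel(\cdot \mid A) \;=\; \arg\min_{Bel' \in \mathcal{B}_A} d\bigl(Bel, Bel'\bigr), \]

where $\mathcal{B}_A$ is the simplex of belief functions whose mass is confined to subsets of the conditioning event $A$, and $d$ is a chosen distance (e.g. $L_1$ or $L_2$) in the space of belief functions; different choices of $d$ induce different conditioning rules.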
Uncertainty measures: The big picture
Probability theory is far from being the most general mathematical theory of uncertainty. A number of arguments point to its inability to describe second-order ('Knightian') uncertainty. In response, a wide array of theories of uncertainty have been proposed, many of them generalisations of classical probability. As we show here, such frameworks can be organised into clusters sharing a common rationale, exhibit complex links, and are characterised by different levels of generality. Our goal is a critical appraisal of the current landscape in uncertainty theory.
On Mixed Iterated Revisions
Several forms of iterable belief change exist, differing in the kind of change and its strength: some operators introduce formulae, others remove them; some add formulae unconditionally, others only as additions to the previous beliefs; some only relative to the current situation, others in all possible cases. A sequence of changes may involve several of them: for example, the first step is a revision, the second a contraction and the third a refinement of the previous beliefs. The ten operators considered in this article are shown to be all reducible to three: lexicographic revision, refinement and severe withdrawal. In turn, these three can be expressed in terms of lexicographic revision at the cost of restructuring the sequence. This restructuring need not be done explicitly: we present an algorithm that works on the original sequence. The complexity of mixed sequences of belief change operators is also analyzed. Most of them require only a polynomial number of calls to a satisfiability checker, and some are even easier.
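To make the central reduction target concrete, here is a minimal sketch of lexicographic revision over an explicitly enumerated model space; the encoding of epistemic states as lists of rank-sets, ordered from most to least plausible, is an assumption for illustration, not the article's.

    # Lexicographic revision by phi: every phi-model becomes strictly more
    # plausible than every non-phi-model, preserving order within each class.
    def lex_revise(ranks, phi):
        sat = [{m for m in level if phi(m)} for level in ranks]
        unsat = [{m for m in level if not phi(m)} for level in ranks]
        return [level for level in sat + unsat if level]

    # Example: models are truth assignments over atoms p, q encoded as tuples.
    models = [(p, q) for p in (0, 1) for q in (0, 1)]
    state = [set(models)]                            # all models equally plausible
    state = lex_revise(state, lambda m: m[0] == 1)   # revise by p
    state = lex_revise(state, lambda m: m[1] == 1)   # then by q
    print(state[0])   # {(1, 1)}: q-models come first, ties broken by p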
Deep Interpretable Models of Theory of Mind For Human-Agent Teaming
Oguntola, Ini, Hughes, Dana, Sycara, Katia
When developing AI systems that interact with humans, it is essential to design both a system that can understand humans, and a system that humans can understand. Most deep-network-based agent-modeling approaches are 1) not interpretable and 2) limited to modeling external behavior, ignoring internal mental states, which potentially limits their capability for assistance, interventions, discovering false beliefs, etc. To this end, we develop an interpretable modular neural framework for modeling the intentions of other observed entities. We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft, and show that incorporating interpretability can significantly increase predictive performance under the right conditions.
grASP: A Graph Based ASP-Solver and Justification System
Li, Fang, Wang, Huaduo, Gupta, Gopal
Answer set programming (ASP) is a popular nonmonotonic-logic-based paradigm for knowledge representation and for solving combinatorial problems. Computing the answer set of an ASP program is NP-hard in general, and researchers have been investing significant effort to speed it up. The majority of current ASP solvers employ SAT-solver-like technology to find these answer sets. As a result, justification for why a literal is in the answer set is hard to produce. There are dependency-graph-based approaches to finding answer sets, but due to the representational limitations of dependency graphs, such approaches are limited. We propose a novel dependency-graph-based approach for finding answer sets in which the conjunction of goals is explicitly represented as a node, which allows arbitrary answer set programs to be represented uniformly. Our representation preserves causal relationships, allowing a justification for each literal in the answer set to be found elegantly. Performance results from an implementation are also reported. Our work paves the way for computing answer sets without grounding a program.
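A hypothetical toy encoding may help picture the idea; here a rule body is an explicit conjunction node in the graph (the names and edge labels below are illustrative, not grASP's actual data structures).

    # Program: p :- q, not r.   q.
    # Edges carry a sign: "+" for positive dependence, "-" for default negation.
    graph = {
        "conj1": [("q", "+"), ("r", "-")],  # conjunction node for the body of rule 1
        "p":     [("conj1", "+")],          # head depends on its body node
        "q":     [],                        # fact: no dependencies
        "r":     [],                        # unsupported atom
    }

A justification for p then falls out of the graph: follow p to conj1, whose positive edge to q is supported by the fact q, and whose negative edge to r holds because r has no support.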
Parsimonious Inference
Duersch, Jed A., Catanach, Thomas A.
Bayesian inference provides a uniquely rigorous approach to obtain principled justification for uncertainty in predictions, yet it is difficult to articulate suitably general prior belief in the machine learning context, where computational architectures are pure abstractions subject to frequent modifications by practitioners attempting to improve results. Parsimonious inference is an information-theoretic formulation of inference over arbitrary architectures that formalizes Occam's Razor: we prefer simple and sufficient explanations. Our universal hyperprior assigns plausibility to prior descriptions, encoded as sequences of symbols, by expanding on the core relationships between program length, Kolmogorov complexity, and Solomonoff's algorithmic probability. We then cast learning as information minimization over our composite change in belief when an architecture is specified, training data are observed, and model parameters are inferred. By distinguishing model complexity from prediction information, our framework also quantifies the phenomenon of memorization. Although our theory is general, it is most critical when datasets are limited, e.g., small or skewed. We develop novel algorithms for polynomial regression and random forests that are suitable for such data, as demonstrated by our experiments. Our approaches combine efficient encodings with prudent sampling strategies to construct predictive ensembles without cross-validation, thus addressing a fundamental challenge in how to efficiently obtain predictions from data.
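For orientation, two standard relationships the abstract invokes (background facts, not the paper's composite objective): Solomonoff's algorithmic probability of a string $x$ under a universal prefix machine $U$,

    \[ M(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}, \]

and the description-length reading of Occam's Razor, which prefers the model minimizing $L(\theta) + L(D \mid \theta)$: the cost of describing the model plus the cost of describing the data given the model.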
Inferring Agents Preferences as Priors for Probabilistic Goal Recognition
Gusmão, Kin Max, Pereira, Ramon Fraga, Meneguzzi, Felipe
Recent approaches to goal recognition have leveraged planning landmarks to achieve high accuracy with low runtime cost. These approaches, however, lack a probabilistic interpretation. Furthermore, while most probabilistic approaches to goal recognition assume that the recognizer has access to a prior probability representing, for example, an agent's preferences, virtually no goal recognition approach actually uses the prior in practice, simply assuming a uniform prior. In this paper, we provide a model that both extends landmark-based goal recognition with a probabilistic interpretation and allows the estimation of such prior probability and its use in computing posterior probabilities after repeated interactions with observed agents. We empirically show that our model can not only recognize goals effectively but also successfully infer the correct prior probability distribution representing an agent's preferences.
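A hedged sketch of the kind of posterior computation this enables; the Laplace-smoothed count prior below is an assumed stand-in, not the paper's estimator.

    # Posterior goal probabilities P(G|O) proportional to P(O|G) P(G), with the
    # prior P(G) estimated from counts of goals pursued in earlier interactions.
    import numpy as np

    def goal_posterior(likelihoods, past_goal_counts, alpha=1.0):
        counts = np.asarray(past_goal_counts, dtype=float) + alpha  # smoothing
        prior = counts / counts.sum()
        post = np.asarray(likelihoods) * prior                      # Bayes' rule
        return post / post.sum()

    # Three candidate goals; the agent chose goal 0 seven times and goal 2 once.
    print(goal_posterior(likelihoods=[0.4, 0.4, 0.2], past_goal_counts=[7, 0, 1]))
    # -> [0.8, 0.1, 0.1]: the learned prior breaks the tie between goals 0 and 1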
A maximum entropy model of bounded rational decision-making with prior beliefs and market feedback
Evans, Benjamin Patrick, Prokopenko, Mikhail
Bounded rationality is an important consideration stemming from the fact that agents often have limits on their processing abilities, making the assumption of perfect rationality inapplicable to many real tasks. We propose an information-theoretic approach to the inference of agent decisions under Smithian competition. The model explicitly captures the boundedness of agents (limited in their information-processing capacity) as the cost of information acquisition for expanding their prior beliefs. The expansion is measured as the Kullback-Leibler divergence between posterior decisions and prior beliefs. When information acquisition is free, the homo economicus agent is recovered, while when information acquisition becomes costly, agents instead revert to their prior beliefs. The maximum entropy principle is used to infer least-biased decisions, based upon the notion of Smithian competition formalised within the Quantal Response Statistical Equilibrium framework. The incorporation of prior beliefs into such a framework allowed us to systematically explore the effects of prior beliefs on decision-making, in the presence of market feedback. We verified the proposed model using Australian housing market data, showing how the incorporation of prior knowledge alters the resulting agent decisions. Specifically, it allowed for the separation (and analysis) of past beliefs and utility-maximisation behaviour of the agent.
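The standard KL-regularized decision problem underlying such models makes the two limits explicit (notation assumed here):

    \[ p^{\ast} \;=\; \arg\max_{p}\,\Bigl(\mathbb{E}_{p}[U(a)] \;-\; \tfrac{1}{\beta}\, D_{\mathrm{KL}}\bigl(p \,\|\, q\bigr)\Bigr) \quad\Longrightarrow\quad p^{\ast}(a) \;=\; \frac{q(a)\, e^{\beta U(a)}}{\sum_{a'} q(a')\, e^{\beta U(a')}}, \]

where $q$ is the prior belief and $1/\beta$ prices information: as $\beta \to \infty$ (free information) the utility maximizer, i.e. homo economicus, is recovered, and as $\beta \to 0$ (costly information) $p^{\ast}$ collapses to the prior $q$.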