Belief Revision


A Probabilistic End-To-End Task-Oriented Dialog Model with Latent Belief States towards Semi-Supervised Learning

arXiv.org Artificial Intelligence

Structured belief states are crucial for user goal tracking and database query in task-oriented dialog systems. However, training belief trackers often requires expensive turn-level annotations of every user utterance. In this paper, we aim to alleviate the reliance on belief state labels in building end-to-end dialog systems by leveraging unlabeled dialog data for semi-supervised learning. We propose a probabilistic dialog model, called the LAtent BElief State (LABES) model, in which belief states are represented as discrete latent variables and modeled jointly with system responses given user inputs. Such latent-variable modeling enables us to develop semi-supervised learning under the principled variational learning framework. Furthermore, we introduce LABES-S2S, a copy-augmented Seq2Seq instantiation of LABES. In supervised experiments, LABES-S2S obtains strong results on three benchmark datasets of different scales. When utilizing unlabeled dialog data, semi-supervised LABES-S2S significantly outperforms both supervised-only and semi-supervised baselines. Remarkably, on MultiWOZ we can reduce annotation demands to 50% without loss of performance.
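A minimal sketch of the variational objective that such discrete latent-variable modeling admits (the notation below is my own illustration, not taken from the paper): for a user input u, latent belief state b, and system response r, unlabeled turns can be trained through the evidence lower bound

```latex
\log p_\theta(r \mid u)
  \;\ge\;
  \mathbb{E}_{q_\phi(b \mid u, r)}\!\big[\log p_\theta(r \mid b, u)\big]
  \;-\;
  \mathrm{KL}\!\big(q_\phi(b \mid u, r) \,\|\, p_\theta(b \mid u)\big),
```

while labeled turns supervise the inference network q_phi and the generative model p_theta with the annotated belief state directly.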


Incompatibilities Between Iterated and Relevance-Sensitive Belief Revision

Journal of Artificial Intelligence Research

The AGM paradigm for belief change, as originally introduced by Alchourrón, Gärdenfors, and Makinson, lacks any guidelines for the process of iterated revision. One of the most influential works addressing this problem is Darwiche and Pearl's approach (the DP approach, for short), which, despite its well-documented shortcomings, remains the most dominant to this date. In this article, we make further observations on the DP approach. In particular, we prove that the DP postulates are, in a strong sense, inconsistent with Parikh's relevance-sensitive axiom (P), extending previously identified initial conflicts. Immediate consequences of this result are that an entire class of intuitive revision operators, which includes Dalal's operator, violates the DP postulates, and that the Independence postulate and Spohn's conditionalization are likewise inconsistent with axiom (P). Overall, the study indicates that two fundamental aspects of the revision process, namely iteration and relevance, are in deep conflict, and it opens the discussion on a potential reconciliation towards a comprehensive formal framework for knowledge dynamics.
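For orientation, here is the standard formulation of the four DP postulates (my recapitulation of the usual presentation, not a quotation from the article), for a revision operator *, belief state K, and sentences mu and alpha:

```latex
\begin{aligned}
\text{(C1)}\;& \text{if } \alpha \models \mu, &&\text{then } (K * \mu) * \alpha = K * \alpha;\\
\text{(C2)}\;& \text{if } \alpha \models \lnot\mu, &&\text{then } (K * \mu) * \alpha = K * \alpha;\\
\text{(C3)}\;& \text{if } K * \alpha \models \mu, &&\text{then } (K * \mu) * \alpha \models \mu;\\
\text{(C4)}\;& \text{if } K * \alpha \not\models \lnot\mu, &&\text{then } (K * \mu) * \alpha \not\models \lnot\mu.
\end{aligned}
```

Parikh's axiom (P), informally, demands that revising by a sentence confined to one part of a syntactically splittable belief set leaves the rest of the set untouched; the incompatibility result says that no operator can respect both this locality requirement and the DP iteration constraints in the strong sense proved in the article.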


On a plausible concept-wise multipreference semantics and its relations with self-organising maps

arXiv.org Artificial Intelligence

In this paper we describe a concept-wise multi-preference semantics for description logic, which has its roots in the preferential approach to modeling defeasible reasoning in knowledge representation. We argue that this proposal, besides satisfying some desired properties, such as the KLM postulates, and avoiding the drowning problem, also defines a plausible notion of semantics. We motivate the plausibility of the concept-wise multi-preference semantics by developing a logical semantics of self-organising maps in terms of multi-preference interpretations; self-organising maps have been proposed as possible candidates to explain the psychological mechanisms underlying category generalisation.
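To give a feel for how a trained self-organising map can induce a preference (typicality) ordering, here is a minimal sketch under my own simplifying assumptions: the distance of an input to a category's best-matching unit serves as its preference score, with smaller distances meaning more typical. The toy data, map size, and Euclidean metric are illustrative choices, not the paper's construction.

```python
import numpy as np

def bmu_distance(som_weights, x):
    """Distance from input x to its best-matching unit (BMU) on the map."""
    dists = np.linalg.norm(som_weights - x, axis=-1)  # distance to each map unit
    return dists.min()

# Toy "trained" map: a 5x5 grid of 3-dimensional prototype vectors.
rng = np.random.default_rng(0)
som_weights = rng.normal(size=(5, 5, 3))

# Inputs with a smaller BMU distance are treated as more typical of the
# learned category, which induces a preference ordering over inputs.
inputs = rng.normal(size=(4, 3))
scores = [bmu_distance(som_weights, x) for x in inputs]
order = sorted(range(len(inputs)), key=lambda i: scores[i])
print("preference (typicality) order over inputs:", order)
```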


Compact Belief State Representation for Task Planning

arXiv.org Artificial Intelligence

Task planning in probabilistic belief-state domains allows generating complex and robust execution policies in domains affected by state uncertainty. The performance of a task planner relies on the belief state representation. However, current belief state representations easily become intractable as the number of variables and the execution time grow. To address this problem, we developed a novel belief state representation based on Cartesian product and union operations over belief substates. These two operations, together with single-variable assignment nodes, form an And-Or directed acyclic graph of Belief State (AOBS). We show how to apply actions with probabilistic outcomes and how to measure the probability of conditions holding over the belief state. We evaluated AOBS performance in simulated forward state-space exploration, comparing the size of AOBS with the size of the Binary Decision Diagrams (BDDs) previously used to represent belief states. We show that the AOBS representation is not only much more compact than a full belief state but also scales better than BDDs in most cases.
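A minimal sketch of this kind of structure (my own simplified reading, not the authors' implementation): leaves assign a single variable, product nodes combine substates over disjoint variables, and union nodes weight alternative substates. Querying the probability of a partial assignment then recurses over the DAG:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Assign:          # leaf: a single-variable assignment
    var: str
    val: object

@dataclass
class Product:         # AND: Cartesian product of substates over disjoint variables
    children: List["Node"]

@dataclass
class Union:           # OR: weighted union of alternative substates
    branches: List[Tuple[float, "Node"]]  # (probability, substate)

Node = Assign | Product | Union

def prob_holds(node: Node, cond: Dict[str, object]) -> float:
    """Probability that every variable mentioned in `cond` has the given value."""
    if isinstance(node, Assign):
        # Unmentioned variables don't constrain the condition.
        return 1.0 if cond.get(node.var, node.val) == node.val else 0.0
    if isinstance(node, Product):
        p = 1.0
        for child in node.children:  # disjoint variables: independent factors
            p *= prob_holds(child, cond)
        return p
    # Union: total probability over the weighted alternatives.
    return sum(w * prob_holds(child, cond) for w, child in node.branches)

# Belief: door open with prob 0.7, closed with 0.3; robot at "lab" for sure.
belief = Product([
    Union([(0.7, Assign("door", "open")), (0.3, Assign("door", "closed"))]),
    Assign("robot_at", "lab"),
])
print(prob_holds(belief, {"door": "open", "robot_at": "lab"}))  # 0.7
```

Sharing of identical substates across the DAG is what would make such a representation compact in practice; the toy above omits that for brevity.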


Revision by Conditionals: From Hook to Arrow

arXiv.org Artificial Intelligence

The belief revision literature has largely focussed on the issue of how to revise one's beliefs in the light of information regarding matters of fact. Here we turn to an important but comparatively neglected issue: How might one extend a revision operator to handle conditionals as input? Our approach to this question of 'conditional revision' is distinctive insofar as it abstracts from the controversial details of how to revise by factual sentences. We introduce a 'plug and play' method for uniquely extending any iterated belief revision operator to the conditional case. The flexibility of our approach is achieved by having the result of a conditional revision by a Ramsey Test conditional ('arrow') determined by that of a plain revision by its corresponding material conditional ('hook'). It is shown to satisfy a number of new constraints that are of independent interest.
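As background for the 'hook' and 'arrow' terminology (standard definitions, not the paper's specific construction): the Ramsey Test reads the conditional A => B as acceptance of B after revising by A, while the material conditional A ⊃ B is simply a truth-functional disjunction:

```latex
A \Rightarrow B \in K \quad\text{iff}\quad B \in K * A
  \qquad \text{(Ramsey Test, `arrow')},
\qquad\quad
A \supset B \;\equiv\; \lnot A \lor B
  \qquad \text{(`hook')}.
```

The paper's 'plug and play' recipe fixes the result of revising by the arrow as a function of the result of revising by the corresponding hook, so that any iterated revision operator for factual inputs extends uniquely to conditional inputs.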


Moore's Paradox and the logic of belief

arXiv.org Artificial Intelligence

Moore's Paradox is a test case for any formal theory of belief. In Knowledge and Belief, Hintikka developed a multimodal logic for statements expressing sentences that contain the epistemic notions of knowledge and belief. His account purports to offer an explanation of the paradox. In this paper I argue that Hintikka's interpretation of one of the doxastic operators is philosophically problematic and leads to an unnecessarily strong logical system. I offer a weaker alternative that captures our logical intuitions about the notion of belief more accurately, without sacrificing the possibility of explaining problematic cases such as Moore's Paradox.
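To make the test case concrete (a standard derivation, independent of the paper's particular system): a Moorean sentence has the form p ∧ ¬Bp, and believing it is inconsistent in any normal doxastic logic with the consistency axiom (D) and positive introspection (4):

```latex
\begin{aligned}
B(p \land \lnot Bp)
  &\;\Rightarrow\; Bp \land B\lnot Bp
    && \text{(belief distributes over conjunction)}\\
  &\;\Rightarrow\; BBp \land B\lnot Bp
    && \text{(4: } Bp \to BBp\text{)}\\
  &\;\Rightarrow\; BBp \land \lnot BBp \;\Rightarrow\; \bot
    && \text{(D: } B\lnot\varphi \to \lnot B\varphi\text{)}
\end{aligned}
```

A weaker logic, of the kind the paper argues for, blocks such derivations by giving up some of this deductive strength.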


Towards the Role of Theory of Mind in Explanation

arXiv.org Artificial Intelligence

Theory of Mind is commonly defined as the ability to attribute mental states (e.g., beliefs, goals) to oneself and to others. A large body of previous work, from the social sciences to artificial intelligence, has observed that Theory of Mind capabilities are central to providing an explanation to another agent and to explaining that agent's behaviour. In this paper, we build and expand upon previous work by providing an account of explanation in terms of the beliefs of agents and the mechanism by which agents revise their beliefs given possible explanations. We further identify a set of desiderata for explanations that utilize Theory of Mind. These desiderata inform our belief-based account of explanation.


How to Do Things with Words: A Bayesian Approach

Journal of Artificial Intelligence Research

Communication changes the beliefs of the listener and of the speaker. The value of a communicative act stems from the valuable belief states that result from it. To model this, we build on the Interactive POMDP (IPOMDP) framework, which extends POMDPs to allow agents to model others in multi-agent settings, and we include communication between the agents to formulate Communicative IPOMDPs (CIPOMDPs). We treat communication as a type of action; decisions regarding communicative acts are therefore based on decision-theoretic planning, using the Bellman optimality principle and value iteration, just as they are for all other rational actions. As in any form of planning, the results of actions need to be precisely specified. We use Bayes' theorem to derive how agents update their beliefs in CIPOMDPs; updates are due to agents' actions, observations, messages they send to other agents, and messages they receive from others. The Bayesian decision-theoretic approach frees us from the commonly made assumption of cooperative discourse: we consider agents that are free to be dishonest while communicating and are guided only by their selfish rationality. We use a simple Tiger game to illustrate the belief update and to show that the ability to rationally communicate allows agents to improve the efficiency of their interactions.
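A minimal sketch of such a Bayesian belief update in the Tiger game, treating a received message as just another (possibly unreliable) observation; the numbers and the `trust` parameter are my illustrative assumptions, not the paper's model:

```python
def update(belief_left, likelihood_left, likelihood_right):
    """Bayes' rule over the two states {tiger-left, tiger-right}."""
    joint_left = belief_left * likelihood_left
    joint_right = (1.0 - belief_left) * likelihood_right
    return joint_left / (joint_left + joint_right)

belief_left = 0.5                        # uniform prior over the tiger's location

# Own observation after listening: "growl-left", heard correctly 85% of the time.
belief_left = update(belief_left, 0.85, 0.15)

# Message from the other agent: "tiger is on the left". A possibly dishonest
# speaker is modeled by a trust level: P(message | true) = trust,
# P(message | false) = 1 - trust.
trust = 0.7
belief_left = update(belief_left, trust, 1.0 - trust)

print(f"P(tiger-left) = {belief_left:.3f}")
```

With a fully untrusted speaker (trust = 0.5) the message carries no information and the belief is unchanged, which is how selfish, possibly dishonest communication fits the same update rule.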


Planning in Stochastic Environments with Goal Uncertainty

arXiv.org Artificial Intelligence

We present the Goal Uncertain Stochastic Shortest Path (GUSSP) problem -- a general framework to model path planning and decision making in stochastic environments with goal uncertainty. The framework extends the stochastic shortest path (SSP) model to dynamic environments in which it is impossible to determine the exact goal states ahead of plan execution. GUSSPs introduce flexibility in goal specification by allowing a belief over possible goal configurations. The unique observations at potential goals help the agent identify the true goal during plan execution. The partial observability is restricted to goals, facilitating a reduction to an SSP with a modified state space. We formally define a GUSSP and discuss its theoretical properties. We then propose an admissible heuristic that reduces the planning time of FLARES -- a state-of-the-art probabilistic planner. We also propose a determinization approach for solving this class of problems. Finally, we present empirical results on a search-and-rescue mobile robot and three other problem domains in simulation.
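A minimal sketch of the goal-belief update such an agent might perform (my illustration of the idea; the candidate goals, sensor model, and numbers are assumptions, not from the paper). Partial observability is confined to which candidate location is the true goal:

```python
def update_goal_belief(belief, observed_loc, saw_goal,
                       p_true_pos=0.9, p_false_pos=0.1):
    """Bayes update of P(goal = g) after sensing at one candidate location."""
    new_belief = {}
    for g, p in belief.items():
        if g == observed_loc:
            lik = p_true_pos if saw_goal else 1.0 - p_true_pos
        else:
            lik = p_false_pos if saw_goal else 1.0 - p_false_pos
        new_belief[g] = p * lik
    z = sum(new_belief.values())
    return {g: p / z for g, p in new_belief.items()}

belief = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}     # uniform over candidate goals
belief = update_goal_belief(belief, "A", saw_goal=False)  # sensed at A: no goal
print(belief)  # probability mass shifts toward B and C
```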


Explosive Proofs of Mathematical Truths

arXiv.org Artificial Intelligence

Mathematical proofs are both paradigms of certainty and some of the most explicitly justified arguments in the cultural record. Their very explicitness, however, leads to a paradox, because their probability of error grows exponentially as the argument expands. Here we show that under a cognitively plausible belief formation mechanism that combines deductive and abductive reasoning, mathematical arguments can undergo what we call an epistemic phase transition: a dramatic and rapidly propagating jump from uncertainty to near-complete confidence at reasonable levels of claim-to-claim error rates. To show this, we analyze an unusual dataset of forty-eight machine-aided proofs from the formalized reasoning system Coq, including major theorems ranging from ancient to 21st-century mathematics, along with four hand-constructed cases from Euclid, Apollonius, Spinoza, and Andrew Wiles. Our results bear both on recent work in the history and philosophy of mathematics and on a question basic to cognitive science: how we form beliefs, and justify them to others.
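The exponential-error claim can be made precise with a back-of-the-envelope calculation (my illustration of the stated mechanism, not a formula from the paper): if each of the n claim-to-claim steps is sound independently with probability 1 - epsilon, then

```latex
P(\text{whole argument error-free}) \;=\; (1-\epsilon)^n \;\approx\; e^{-\epsilon n},
```

so even a modest epsilon = 0.01 drives the reliability of a 1,000-step argument down to roughly e^{-10}, about 4.5 x 10^{-5}; the paper's epistemic phase transition describes how belief can nonetheless jump to near-complete confidence despite this decay.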