Belief Revision


On the Relationship Between KR Approaches for Explainable Planning

arXiv.org Artificial Intelligence

In this paper, we build upon notions from knowledge representation and reasoning (KR) to expand a preliminary logic-based framework that characterizes the model reconciliation problem for explainable planning. We also provide a detailed exposition on the relationship between similar KR techniques, such as abductive explanations and belief change, and their applicability to explainable planning.


Bayes Meets Entailment and Prediction: Commonsense Reasoning with Non-monotonicity, Paraconsistency and Predictive Accuracy

arXiv.org Artificial Intelligence

The recent success of Bayesian methods in neuroscience and artificial intelligence gives rise to the hypothesis that the brain is a Bayesian machine. Since logic and learning are both practices of the human brain, this leads to the further hypothesis that there is a Bayesian interpretation underlying both logical reasoning and machine learning. In this paper, we introduce a generative model of logical consequence relations. It formalises the process by which the truth value of a sentence is probabilistically generated from the probability distribution over states of the world. We show that the generative model characterises a classical consequence relation, a paraconsistent consequence relation and a non-monotonic consequence relation. In particular, the generative model gives a new consequence relation that outperforms them in reasoning with inconsistent knowledge. We also show that the generative model gives a new classification algorithm that outperforms several representative algorithms in predictive accuracy and complexity on the Kaggle Titanic dataset.
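
A minimal sketch of the general idea, a consequence relation induced by a probability distribution over possible worlds, is given below; the atoms, the prior and the acceptance threshold are illustrative assumptions, not the paper's actual model.

from itertools import product

# Two assumed propositional atoms and the four possible worlds over them.
atoms = ["rain", "wet"]
worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]

# Assumed prior over worlds (indexed by position in `worlds`, sums to 1):
# rain usually comes with a wet ground.
prior = {0: 0.35, 1: 0.05, 2: 0.30, 3: 0.30}

def prob(sentence, given=lambda w: True):
    """Conditional probability P(sentence | given) under the prior over worlds."""
    num = sum(p for i, p in prior.items() if given(worlds[i]) and sentence(worlds[i]))
    den = sum(p for i, p in prior.items() if given(worlds[i]))
    return num / den if den > 0 else 0.0

# P(wet | rain) = 0.35 / 0.40 = 0.875 under the assumed prior.
print(prob(lambda w: w["wet"], given=lambda w: w["rain"]))
# A thresholded (non-monotonic) reading of consequence: accept "rain entails wet"
# whenever the conditional probability reaches 0.8.
print(prob(lambda w: w["wet"], given=lambda w: w["rain"]) >= 0.8)   # True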


Quantum d-separation and quantum belief propagation

arXiv.org Artificial Intelligence

The goal of this paper is to generalize classical d-separation and classical Belief Propagation (BP) to the quantum realm. Classical d-separation is an essential ingredient of most of Judea Pearl's work. It is crucial to all three rungs of what Pearl calls the Ladder of Causation. So having a quantum version of d-separation and BP probably implies that most of Pearl's work on Bayesian networks, including his theory of causality, can be translated in a straightforward manner to the quantum realm.
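
For reference, here is a compact sketch of classical d-separation, the starting point the paper generalizes, using the standard ancestral-moral-graph test; the example DAG is an assumption for illustration, and this is not the paper's quantum construction.

def d_separated(dag, xs, ys, zs):
    """dag: {node: set(parents)}. Returns True if xs and ys are d-separated given zs."""
    # 1. Restrict attention to xs, ys, zs and all of their ancestors.
    relevant, stack = set(), list(xs | ys | zs)
    while stack:
        n = stack.pop()
        if n not in relevant:
            relevant.add(n)
            stack.extend(dag.get(n, set()))
    # 2. Moralize: undirected parent-child edges plus edges between co-parents.
    und = {n: set() for n in relevant}
    for child in relevant:
        parents = dag.get(child, set()) & relevant
        for p in parents:
            und[p].add(child); und[child].add(p)
        for p in parents:
            for q in parents:
                if p != q:
                    und[p].add(q)
    # 3. Delete zs and test whether xs can still reach ys.
    seen, stack = set(), [x for x in xs if x not in zs]
    while stack:
        n = stack.pop()
        if n in seen or n in zs:
            continue
        seen.add(n)
        stack.extend(und[n] - zs)
    return seen.isdisjoint(ys)

# Collider example A -> C <- B: A and B are d-separated given nothing, but not given {C}.
dag = {"A": set(), "B": set(), "C": {"A", "B"}}
print(d_separated(dag, {"A"}, {"B"}, set()))    # True
print(d_separated(dag, {"A"}, {"B"}, {"C"}))    # False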


Quantum d-separation and quantum belief propagation à la Pearl

#artificialintelligence

I've just written a new paper entitled: Quantum d-separation and quantum belief propagation. Here is the abstract: The goal of this paper is to generalize cl…


Multi-Agent Decentralized Belief Propagation on Graphs

arXiv.org Artificial Intelligence

We consider interactive partially observable Markov decision processes (I-POMDPs) in which the agents are located at the nodes of a communication network. Specifically, we assume a certain message type for all messages. Moreover, each agent makes individual decisions based on its interactive belief states, the information observed locally and the messages received from its neighbors over the network. Within this setting, the collective goal of the agents is to maximize the globally averaged return over the network by exchanging information with their neighbors. We propose a decentralized belief propagation algorithm for the problem and prove its convergence. Finally, we show multiple applications of our framework. Our work appears to be the first study of a decentralized belief propagation algorithm for networked multi-agent I-POMDPs.
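
As a generic illustration of the communication pattern only (not the paper's algorithm), the sketch below has agents at graph nodes repeatedly mix a local quantity with their neighbors' values until every node approaches the network-wide average; the graph, initial values, Metropolis weights and iteration count are assumptions.

neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a line graph of 4 agents
values = {0: 4.0, 1: 0.0, 2: 2.0, 3: 6.0}            # each agent's local value
deg = {i: len(ns) for i, ns in neighbors.items()}

def mix(values):
    """One communication round with Metropolis weights (keeps the update average-preserving)."""
    new = {}
    for i in neighbors:
        w = {j: 1.0 / (1 + max(deg[i], deg[j])) for j in neighbors[i]}
        new[i] = (1 - sum(w.values())) * values[i] + sum(w[j] * values[j] for j in neighbors[i])
    return new

for _ in range(200):
    values = mix(values)

print(values)  # every entry approaches the global average 3.0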


Acoustics Based Intent Recognition Using Discovered Phonetic Units for Low Resource Languages

arXiv.org Artificial Intelligence

With recent advancements in language technologies, humans are now interacting with technology through speech. To increase the reach of these technologies, we need to build such systems in local languages. A major bottleneck here is the underlying data-intensive parts that make up such systems, including automatic speech recognition (ASR) systems that require large amounts of labelled data. With the aim of aiding the development of dialog systems in low-resourced languages, we propose a novel acoustics-based intent recognition system that uses discovered phonetic units for intent classification. The system is made up of two blocks: the first block generates a transcript of discovered phonetic units for the input audio, and the second block performs intent classification from the generated phonemic transcripts. Our work presents results for such a system for two language families, Indic languages and Romance languages, on two different intent recognition tasks. We also perform multilingual training of our intent classifier and show improved cross-lingual transfer and performance on an unknown language with zero resources in the same language family.
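
A minimal sketch of the two-block structure under stated assumptions: block 1 (acoustic unit discovery) is stubbed out with pre-computed pseudo-phonetic unit strings, and block 2 classifies intents from unit n-grams; the data, labels and classifier are illustrative and not the paper's actual pipeline.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Block 1 (assumed output): discovered-unit "transcripts", one string of unit IDs per utterance.
transcripts = ["u3 u7 u1 u1 u9", "u3 u7 u2 u9", "u5 u5 u8 u4", "u5 u8 u8 u4 u6"]
intents     = ["play_music",     "play_music",  "set_alarm",   "set_alarm"]

# Block 2: intent classification from unit unigrams and bigrams.
clf = Pipeline([
    ("ngrams", CountVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(transcripts, intents)
print(clf.predict(["u3 u7 u1 u9"]))   # expected: ['play_music']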


A Feedback Scheme to Reorder a Multi-Agent Execution Schedule by Persistently Optimizing a Switchable Action Dependency Graph

arXiv.org Artificial Intelligence

In this paper we consider multiple Automated Guided Vehicles (AGVs) navigating a common workspace to fulfill various intralogistics tasks, typically formulated as the Multi-Agent Path Finding (MAPF) problem. To keep plan execution deadlock-free, one approach is to construct an Action Dependency Graph (ADG) which encodes the ordering of AGVs as they proceed along their routes. Using this method, delayed AGVs occasionally require others to wait for them at intersections, thereby affecting the plan execution efficiency. If the workspace is shared with dynamic obstacles such as humans or third-party robots, AGVs can experience large delays. A common mitigation approach is to re-solve the MAPF using the current, delayed AGV positions. However, solving the MAPF is time-consuming, making this approach inefficient, especially for large AGV teams. In this work, we present an online method to repeatedly modify a given acyclic ADG to minimize the route completion time of each AGV. Our approach persistently maintains an acyclic ADG, which is necessary for deadlock-free plan execution. We evaluate the approach in simulations with random disturbances on the execution and show faster route completion times compared to the baseline ADG-based execution management approach.
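
The core invariant can be sketched as follows (a simplification, not the paper's full method): a candidate switch of a dependency, i.e. reversing which AGV must pass a shared location first, is applied only if the ADG stays acyclic; node names and the example edge are assumptions.

def has_cycle(adg):
    """adg: {action: set(successor actions)}. DFS-based cycle check."""
    nodes = set(adg) | {m for succs in adg.values() for m in succs}
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}
    def visit(n):
        color[n] = GREY
        for m in adg.get(n, set()):
            if color[m] == GREY or (color[m] == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in nodes)

def try_switch(adg, u, v):
    """Reverse the dependency u -> v to v -> u iff the ADG stays acyclic; report success."""
    adg[u].discard(v)
    adg.setdefault(v, set()).add(u)
    if has_cycle(adg):            # revert: the switch would allow a deadlock
        adg[v].discard(u)
        adg[u].add(v)
        return False
    return True

# Example: AGV1's action "a1" currently precedes AGV2's action "b1" at a shared node.
adg = {"a1": {"b1"}, "b1": set()}
print(try_switch(adg, "a1", "b1"), adg)   # True, and the ADG now orders b1 before a1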


On the use of evidence theory in belief base revision

arXiv.org Artificial Intelligence

This paper deals with belief base revision, a form of belief change consisting of the incorporation of new facts into an agent's beliefs represented by a finite set of propositional formulas. With the aim of guaranteeing more reliability and rationality for real applications while performing revision, we propose the idea of credible belief base revision, which leads to two new formula-based revision operators defined using the suitable tools offered by evidence theory. These operators, uniformly presented in the same spirit as others in [9], stem from consistent subbases that are maximal with respect to credibility instead of set inclusion or cardinality. Moreover, in between these two extreme operators, evidence theory lets us shed some light on a compromise operator that avoids losing initial beliefs to the maximum extent possible. Its idea is to capture the maximal consistent sets stemming from all possible intersections of maximal consistent subbases. An illustration of all these operators and a comparison with others are investigated through examples.
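
A toy sketch of the selection step only, under assumptions: formulas over a few atoms carry illustrative credibility weights (standing in for evidence-theoretic masses), consistency is checked by brute force over truth assignments, and the consistent subbases of maximal total credibility are kept. This conveys the flavour of "maximal with respect to credibility" rather than the paper's operators.

from itertools import combinations, product

atoms = ["p", "q"]
# Belief base: (name, evaluation function, credibility weight) -- all assumed.
base = [
    ("p",      lambda w: w["p"],                 0.9),
    ("p -> q", lambda w: (not w["p"]) or w["q"], 0.6),
    ("not q",  lambda w: not w["q"],             0.4),
]

def consistent(formulas):
    """True iff some truth assignment satisfies every formula in the subset."""
    for vals in product([True, False], repeat=len(atoms)):
        w = dict(zip(atoms, vals))
        if all(f(w) for _, f, _ in formulas):
            return True
    return False

# Enumerate consistent subbases and keep those with maximal total credibility.
subbases = [s for r in range(len(base) + 1) for s in combinations(base, r) if consistent(s)]
best = max(sum(c for _, _, c in s) for s in subbases)
maximal = [[name for name, _, _ in s] for s in subbases if sum(c for _, _, c in s) == best]
print(maximal)   # [['p', 'p -> q']] with total credibility 1.5 under the assumed weights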


Recursive Experts: An Efficient Optimal Mixture of Learning Systems in Dynamic Environments

arXiv.org Machine Learning

Sequential learning systems are used in a wide variety of problems, from decision making to optimization, where they provide a 'belief' (opinion) to nature and then update this belief based on the feedback (result) to minimize (or maximize) some cost or loss (conversely, utility or gain). The goal is to reach an objective by exploiting the temporal relation inherent to nature's feedback (state). By exploiting this relation, specific learning systems can be designed that perform asymptotically optimally for various applications. However, if the framework of the problem is not stationary, i.e., nature's state sometimes changes arbitrarily, the past cumulative belief revision done by the system may become useless and the system may fail if it lacks adaptivity. While this adaptivity can be directly implemented in specific cases (e.g., convex optimization), it is mostly not straightforward for general learning tasks. To this end, we propose an efficient optimal mixture framework for general sequential learning systems, which we call recursive experts for dynamic environments. For this purpose, we design hyper-experts that incorporate the learning systems at our disposal and recursively merge them in a specific way to achieve minimax optimal regret bounds up to constant factors. The multiplicative increase in computational complexity from the initial system to our adaptive system is only a logarithmic-in-time factor.
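
For context, a classical exponentially weighted mixture of experts (Hedge), the standard building block such frameworks combine; this is not the paper's recursive construction, and the experts, losses and learning rate are assumptions.

import math, random

random.seed(0)
n_experts, eta, T = 3, 0.5, 100
weights = [1.0] * n_experts
mixture_loss, expert_loss = 0.0, [0.0] * n_experts

for t in range(T):
    # Assumed non-stationary environment: expert 0 is cheap in the first half,
    # expert 2 in the second half, everything else is expensive on average.
    losses = [
        random.random() * (0.2 if (i == 0 and t < T // 2) or (i == 2 and t >= T // 2) else 1.0)
        for i in range(n_experts)
    ]
    probs = [w / sum(weights) for w in weights]
    mixture_loss += sum(p * l for p, l in zip(probs, losses))      # expected loss of the mixture
    for i in range(n_experts):
        expert_loss[i] += losses[i]
        weights[i] *= math.exp(-eta * losses[i])                   # multiplicative weight update

print(round(mixture_loss, 2), [round(l, 2) for l in expert_loss])
# Plain Hedge competes with the best *fixed* expert in hindsight; adapting quickly when
# the best expert changes is exactly what the paper's recursive mixture targets.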


A Probabilistic End-To-End Task-Oriented Dialog Model with Latent Belief States towards Semi-Supervised Learning

arXiv.org Artificial Intelligence

Structured belief states are crucial for user goal tracking and database query in task-oriented dialog systems. However, training belief trackers often requires expensive turn-level annotations of every user utterance. In this paper we aim to alleviate the reliance on belief state labels in building end-to-end dialog systems by leveraging unlabeled dialog data for semi-supervised learning. We propose a probabilistic dialog model, called the LAtent BElief State (LABES) model, where belief states are represented as discrete latent variables and jointly modeled with system responses given user inputs. Such latent variable modeling enables us to develop semi-supervised learning under the principled variational learning framework. Furthermore, we introduce LABES-S2S, a copy-augmented Seq2Seq instantiation of LABES. In supervised experiments, LABES-S2S obtains strong results on three benchmark datasets of different scales. When utilizing unlabeled dialog data, semi-supervised LABES-S2S significantly outperforms both supervised-only and semi-supervised baselines. Remarkably, we can reduce the annotation demands to 50% without performance loss on MultiWOZ.
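
The "principled variational learning framework" referred to above typically amounts to maximizing an evidence lower bound with the belief state as a discrete latent variable; a generic form, in our own notation rather than necessarily the paper's exact objective, is

\log p_\theta(R \mid U) \;\ge\; \mathbb{E}_{q_\phi(b \mid U, R)}\!\left[\log p_\theta(R \mid b, U)\right] - \mathrm{KL}\!\left(q_\phi(b \mid U, R) \,\|\, p_\theta(b \mid U)\right),

where U is the user input, R the system response, b the discrete latent belief state, p_\theta the generative model and q_\phi the inference (tracking) distribution; turns with belief annotations can supervise b directly, while unlabeled turns contribute through this bound.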