AAAI Conferences

In this paper, we address the problem of applying AGM-style belief revision to non-classical logics. We discuss the idea of minimal change in revision and show that for non-classical logics, some sort of minimality postulate has to be explicitly introduced. We also present two constructions for revision which satisfy the AGM postulates and prove the representation theorems including minimality postulates.
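The paper's own constructions are not reproduced here, but a Dalal-style, model-based revision operator is a standard way to make the minimal-change idea concrete: among the models of the new information, keep those at minimum Hamming distance from some model of the current beliefs. A minimal sketch (the `models` helper and the two-atom example are illustrative, not the paper's constructions):

```python
from itertools import product

def models(formula, atoms):
    """All truth assignments (as tuples of booleans over `atoms`) satisfying `formula`."""
    return {w for w in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, w)))}

def hamming(w1, w2):
    return sum(a != b for a, b in zip(w1, w2))

def revise(base_models, new_models):
    """Dalal-style revision: among the models of the new information,
    keep those closest (in Hamming distance) to some model of the base."""
    if not base_models:                 # inconsistent base: just adopt the new info
        return set(new_models)
    dist = {w: min(hamming(w, v) for v in base_models) for w in new_models}
    d_min = min(dist.values())
    return {w for w, d in dist.items() if d == d_min}

atoms = ["p", "q"]
K = models(lambda v: v["p"] and v["q"], atoms)   # believe p and q
A = models(lambda v: not v["p"], atoms)          # learn not-p
print(revise(K, A))                              # minimal change keeps q: {(False, True)}
```

The minimality postulates discussed in the abstract correspond here to the choice of keeping only the distance-minimizing models rather than all models of the new information.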


AAAI Conferences

Formalizing dynamics of argumentation has received increasing attention over the last years. While AGM-like representation results for revision of argumentation frameworks (AFs) are now available, similar results for the problem of merging are still missing. In this paper, we close this gap and adapt model-based propositional belief merging to define extension-based merging operators for AFs. We state an axiomatic and a constructive characterization of merging operators through a family of rationality postulates and a representation theorem. Then we exhibit merging operators which satisfy the postulates. In contrast to the case of revision, we observe that obtaining a single framework as result of merging turns out to be a more subtle issue. Finally, we establish links between our new results and previous approaches to merging of AFs, which mainly relied on axioms from Social Choice Theory, but lacked AGM-like representation theorems.
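A distance-based sketch in the spirit of model-based propositional merging, transposed to extensions (sets of accepted arguments); the candidate set and the sum-of-distances aggregation are illustrative assumptions, not the paper's exact operators:

```python
def distance(e1, e2):
    """Set distance between two extensions: size of the symmetric difference."""
    return len(e1 ^ e2)

def merge(profile, candidates):
    """Distance-based merging: pick the candidate extensions that minimize
    the total distance to the agents' extensions in the profile."""
    cost = {frozenset(c): sum(distance(set(c), set(e)) for e in profile)
            for c in candidates}
    d_min = min(cost.values())
    return {c for c, d in cost.items() if d == d_min}

agents = [{"a"}, {"a", "b"}, {"b"}]                 # three agents' extensions
candidates = [{"a"}, {"b"}, {"a", "b"}, set()]      # admissible candidates (assumed given)
print(merge(agents, candidates))                    # the compromise extension {a, b}
```

The subtlety the abstract mentions shows up here too: the merged result is in general a *set* of extensions, and mapping it back to a single argumentation framework is a separate step.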

Conditional Inference and Activation of Knowledge Entities in ACT-R

Activation-based conditional inference applies conditional reasoning to ACT-R, a cognitive architecture developed to formalize human reasoning. The idea of activation-based conditional inference is to determine a reasonable subset of a conditional belief base in order to draw inductive inferences in time. Central to activation-based conditional inference is the activation function which assigns to the conditionals in the belief base a degree of activation mainly based on the conditional's relevance for the current query and its usage history.
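A toy rendering of such an activation function, combining a hypothetical query-relevance score with ACT-R's base-level activation from usage history; the additive combination and the overlap-based relevance measure are assumptions for illustration:

```python
import math

def base_level(usage_times, now, decay=0.5):
    """ACT-R base-level activation: recent and frequent use raises activation."""
    return math.log(sum((now - t) ** -decay for t in usage_times))

def relevance(conditional, query_atoms):
    """Hypothetical relevance: overlap between the conditional's atoms and the query."""
    atoms = conditional["antecedent"] | conditional["consequent"]
    return len(atoms & query_atoms) / len(atoms)

def activation(conditional, query_atoms, now):
    return relevance(conditional, query_atoms) + base_level(conditional["uses"], now)

def focus(belief_base, query_atoms, now, threshold=0.0):
    """The 'reasonable subset': conditionals whose activation clears a threshold."""
    return [c for c in belief_base if activation(c, query_atoms, now) >= threshold]

birds = {"antecedent": {"bird"}, "consequent": {"flies"}, "uses": [99]}
cars  = {"antecedent": {"car"},  "consequent": {"wheels"}, "uses": [1]}
print([c["consequent"] for c in focus([birds, cars], {"bird"}, now=100)])
# → [{'flies'}]
```

Inference is then drawn from the focused subset only, which is what keeps reasoning tractable "in time".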

Deep Interpretable Models of Theory of Mind For Human-Agent Teaming

When developing AI systems that interact with humans, it is essential to design both a system that can understand humans, and a system that humans can understand. Most deep network based agent-modeling approaches are 1) not interpretable and 2) only model external behavior, ignoring internal mental states, which potentially limits their capability for assistance, interventions, discovering false beliefs, etc. To this end, we develop an interpretable modular neural framework for modeling the intentions of other observed entities. We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft, and show that incorporating interpretability can significantly increase predictive performance under the right conditions.

A Qualitative Theory of Cognitive Attitudes and their Change

Since the seminal work of Hintikka on epistemic logic [28], of Von Wright on the logic of preference [55, 56] and of Cohen & Levesque on the logic of intention [19], many formal logics for reasoning about cognitive attitudes of agents such as knowledge and belief [24], preference [32, 48], desire [23], intention [44, 30] and their combination [38, 54] have been proposed. Generally speaking, these logics are nothing but formal models of rational agency relying on the idea that an agent endowed with cognitive attitudes makes decisions on the basis of what she believes and of what she desires or prefers. The idea of describing rational agents in terms of their epistemic and motivational attitudes is something that these logics share with classical decision theory and game theory. Classical decision theory and game theory provide a quantitative account of individual and strategic decision-making by assuming that agents' beliefs and desires can be respectively modeled by subjective probabilities and utilities. Qualitative approaches to individual and strategic decision-making have been proposed in AI [16, 22] to characterize criteria that a rational agent should adopt for making decisions when she cannot build a probability distribution over the set of possible events and her preference over the set of possible outcomes cannot be expressed by a utility function but only by a qualitative ordering over the outcomes.
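One classical qualitative decision criterion of the kind referred to above is maximin, which needs only an ordinal ranking over outcomes rather than utilities or probabilities. A minimal sketch (the acts, outcomes, and ranking are hypothetical):

```python
def maximin(acts, outcomes, rank):
    """Qualitative maximin: choose the acts whose worst possible outcome
    is best under an ordinal ranking (no utilities, no probabilities)."""
    worst = {a: min(rank(o) for o in outcomes[a]) for a in acts}
    best = max(worst.values())
    return [a for a in acts if worst[a] == best]

outcomes = {"safe": ["ok"], "risky": ["great", "disaster"]}
rank = {"disaster": 0, "ok": 1, "great": 2}.get   # ordinal ranks, not utilities
print(maximin(["safe", "risky"], outcomes, rank))  # → ['safe']
```

Only comparisons between outcome ranks are used, which is exactly what a qualitative ordering provides when no probability distribution or utility function is available.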

On the Relationship Between KR Approaches for Explainable Planning

In this paper, we build upon notions from knowledge representation and reasoning (KR) to expand a preliminary logic-based framework that characterizes the model reconciliation problem for explainable planning. We also provide a detailed exposition on the relationship between similar KR techniques, such as abductive explanations and belief change, and their applicability to explainable planning.

On a plausible concept-wise multipreference semantics and its relations with self-organising maps

In this paper we describe a concept-wise multi-preference semantics for description logic which has its roots in the preferential approach for modeling defeasible reasoning in knowledge representation. We argue that this proposal, besides satisfying some desired properties, such as the KLM postulates, and avoiding the drowning problem, also defines a plausible notion of semantics. We motivate the plausibility of the concept-wise multi-preference semantics by developing a logical semantics of self-organising maps, which have been proposed as possible candidates to explain the psychological mechanisms underlying category generalisation, in terms of multi-preference interpretations.

Goal Recognition over Imperfect Domain Models

Goal recognition is the problem of recognizing the intended goal of autonomous agents or humans by observing their behavior in an environment. Over the past years, most existing approaches to goal and plan recognition have ignored the need to deal with imperfections in the domain model that formalizes the environment where autonomous agents behave. In this thesis, we introduce the problem of goal recognition over imperfect domain models, and develop solution approaches that explicitly deal with two distinct types of imperfect domain models: (1) incomplete discrete domain models that have possible, rather than known, preconditions and effects in action descriptions; and (2) approximate continuous domain models, where the transition function is approximated from past observations and not well-defined. We develop novel goal recognition approaches over imperfect domain models by leveraging and adapting existing recognition approaches from the literature. Experiments and evaluation over these two types of imperfect domain models show that our novel goal recognition approaches are accurate in comparison to baseline approaches from the literature, at several levels of observability and imperfection.
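As a toy illustration of the goal recognition setting itself (not the thesis's approaches), candidate goals can be ranked by how well the observed action sequence matches a pre-computed plan for each goal:

```python
def recognize(observations, goal_plans):
    """Rank candidate goals by the fraction of observed actions that occur,
    in order, as a subsequence of a (given) plan for each goal."""
    def matched(obs, plan):
        it = iter(plan)                      # consumed left-to-right
        return sum(1 for a in obs if a in it)
    scores = {g: matched(observations, p) / max(len(observations), 1)
              for g, p in goal_plans.items()}
    best = max(scores.values())
    return [g for g, s in scores.items() if s == best]

plans = {"g1": ["a", "b", "c"], "g2": ["a", "x", "y"]}
print(recognize(["a", "c"], plans))  # → ['g1']
```

An imperfect domain model would make the plans themselves unreliable, which is precisely the complication the thesis addresses.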

Towards the Role of Theory of Mind in Explanation

Theory of Mind is commonly defined as the ability to attribute mental states (e.g., beliefs, goals) to oneself, and to others. A large body of previous work - from the social sciences to artificial intelligence - has observed that Theory of Mind capabilities are central to providing an explanation to another agent or when explaining that agent's behaviour. In this paper, we build and expand upon previous work by providing an account of explanation in terms of the beliefs of agents and the mechanism by which agents revise their beliefs given possible explanations. We further identify a set of desiderata for explanations that utilize Theory of Mind. These desiderata inform our belief-based account of explanation.

A Temporal Module for Logical Frameworks

In the literature, different kinds of timed logical frameworks exist, where time is specified directly using hybrid logics (cf., e.g., [2]), temporal epistemic logic (cf., e.g., [4]), or simply by using Linear Temporal Logic. We propose a temporal module which can be adopted to "temporalize" many logical frameworks. This module is in practice a particular kind of function that assigns a "timing" to atoms. We have exploited this T function in two different settings. The first one is the formalization of reasoning on the formation of beliefs and the interaction with background knowledge in non-omniscient agents' memory.
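Read as code, such a timing function T maps each atom to the time points at which it holds; "temporalizing" a set of atoms then pairs atoms with times (an illustrative reading, not the paper's exact definition):

```python
# A "timing" function T maps each atom to the set of time points at which
# it is asserted to hold; temporalized atoms are (atom, time) pairs.

def temporalize(atoms, T):
    """Expand a set of atoms into timed atoms (atom, t) for each t in T(atom)."""
    return {(a, t) for a in atoms for t in T(a)}

T = lambda a: {0, 1} if a == "raining" else {1}   # hypothetical timing function
print(sorted(temporalize({"raining", "wet"}, T)))
# → [('raining', 0), ('raining', 1), ('wet', 1)]
```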