Changing One's Mind: Erase or Rewind? Possibilistic Belief Revision with Fuzzy Argumentation Based on Trust

AAAI Conferences

We address the issue, in cognitive agents, of the possible loss of previous information, which may later turn out to be correct when new information becomes available. To this end, we propose a framework for changing the agent's mind without erasing previous information forever, thus allowing its recovery in case the change turns out to be wrong. In this new framework, a piece of information is represented as an argument which can be more or less accepted depending on the trustworthiness of the agent who proposes it. We adopt possibility theory to represent uncertainty about the information and to model the fact that information sources can be only partially trusted. The originality of the proposed framework lies in the following two points: (i) argument reinstatement is mirrored in belief reinstatement in order to avoid the loss of previous information; (ii) new incoming information is represented in the form of arguments and is associated with a plausibility degree depending on the trustworthiness of the information source.
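
As a rough illustration of how possibility theory can attach source trust to incoming information (a minimal sketch in standard possibilistic-logic notation, with symbols \pi, \Pi, N, and \tau of our own choosing; it is not the paper's exact construction): the possibility and necessity of a formula \varphi are

  \Pi(\varphi) = \max_{\omega \models \varphi} \pi(\omega), \qquad N(\varphi) = 1 - \Pi(\neg\varphi),

and a piece of information \varphi reported by a source trusted to degree \tau \in [0,1] can be recorded as the weighted formula (\varphi, \tau), read as the constraint N(\varphi) \ge \tau, so a fully distrusted source (\tau = 0) imposes no constraint at all.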


Studies in Credibility-Limited Base Revision

AAAI Conferences

In this paper we present axiomatic characterizations for several classes of credibility-limited base revision functions and establish the interrelation among those classes. We also propose and axiomatically characterize two new base revision functions.
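
For orientation, the core idea behind credibility-limited revision can be sketched as follows (a schematic rendering in standard notation; the paper's constructions and postulates for the base case are richer):

  K \circledast \alpha = K * \alpha \text{ if } \alpha \in C, \qquad K \circledast \alpha = K \text{ otherwise},

where C is the set of credible sentences and * is an ordinary revision operator, so non-credible inputs are rejected outright rather than incorporated into the base.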


A Logic for Reasoning about Justified Uncertain Beliefs

AAAI Conferences

Justification logic originated from the study of the logic of proofs. In a more general setting, however, it may be regarded as a kind of explicit epistemic logic, in which the reasons why a fact is believed are explicitly represented as justification terms. The modeling of uncertain beliefs is also crucially important for epistemic reasoning. While graded modal logics interpreted with possibility theory semantics have been successfully applied to representing and reasoning about uncertain beliefs, they cannot keep track of the reasons why an agent believes a fact. The objective of this paper is to extend graded modal logics with explicit justifications. We introduce a possibilistic justification logic, present its syntax and semantics, and investigate its meta-properties, such as soundness, completeness, and realizability.
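
To fix intuitions, a graded justification assertion might be written as follows (hypothetical notation chosen purely for illustration; the paper's actual syntax and semantics may differ):

  t :_r \varphi \quad \text{read as ``term } t \text{ justifies believing } \varphi \text{ with certainty at least } r\text{''},

with a possibility-theoretic truth condition along the lines of N_w(\varphi) \ge r at world w, combining explicit evidence terms from justification logic with the graded necessity measures of possibilistic semantics.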


Description Logics and Fuzzy Probability

AAAI Conferences

Uncertainty and vagueness are pervasive phenomena in real-life knowledge. They are supported in extended description logics that adapt classical description logics to deal with numerical probabilities or fuzzy truth degrees. While the two concepts are distinguished for good reasons, they combine in the notion of "probably", which is ultimately a fuzzy qualification of probabilities. Here, we develop existing propositional logics of fuzzy probability into a full-blown description logic, and we show decidability of several variants of this logic under Łukasiewicz semantics. We obtain these results in a novel generic framework of fuzzy coalgebraic logic; this enables us to extend our results to logics that combine crisp ingredients (including standard crisp roles and crisp numerical probabilities) with fuzzy roles and fuzzy probabilities.
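
For reference, the Łukasiewicz truth functions on [0, 1] that underlie such semantics are (standard definitions, not specific to this paper):

  x \otimes y = \max(0,\, x + y - 1), \qquad x \rightarrow y = \min(1,\, 1 - x + y), \qquad \neg x = 1 - x,

so a fuzzy reading of "probably \varphi" can, roughly, take a truth degree that increases with the probability of \varphi rather than switching from false to true at a fixed threshold.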


Reasoning about Fuzzy Belief and Common Belief: With Emphasis on Incomparable Beliefs

AAAI Conferences

We formalize reasoning about fuzzy belief and fuzzy common belief, with emphasis on incomparable beliefs, in multi-agent systems, using a logical system based on Fitting's many-valued modal logic; here, incomparable beliefs are beliefs whose degrees are not totally ordered. Completeness and decidability results for the logic of fuzzy belief and common belief are established, implicitly exploiting the duality-theoretic perspective on Fitting's logic that builds upon the author's previous work. A conceptually novel feature is that incomparable beliefs and qualitative fuzziness can be formalized in the developed system, whereas they cannot be formalized in previously proposed systems for reasoning about fuzzy belief. We believe that belief degrees can ultimately be reduced to truth degrees; we call this "the reduction thesis about belief degrees", which is assumed in the present paper and motivates an axiom of our system. We finally argue that fuzzy reasoning sheds new light on old epistemic issues such as the coordinated attack problem.
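
As a toy illustration of incomparable belief degrees (our own example, not taken from the paper): take belief degrees in the product lattice [0, 1]^2 ordered componentwise; then

  (0.7,\, 0.2) \not\le (0.4,\, 0.6) \quad \text{and} \quad (0.4,\, 0.6) \not\le (0.7,\, 0.2),

so an agent may hold two beliefs whose degrees cannot be ranked against each other, precisely the situation that a totally ordered scale such as [0, 1] cannot express.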