Nonmonotonic Logic


Logic and Decision-Theoretic Methods for Planning under Uncertainty

AI Magazine

Decision theory and nonmonotonic logics are formalisms that can be employed to represent and solve problems of planning under uncertainty. We analyze the usefulness of these two approaches by establishing a simple correspondence between the two formalisms. The analysis indicates that planning using nonmonotonic logic comprises two decision-theoretic concepts: probabilities (degrees of belief in planning hypotheses) and utilities (degrees of preference for planning outcomes). We present and discuss examples of the following lessons from this decision-theoretic view of nonmonotonic reasoning: (1) decision theory and nonmonotonic logics are intended to solve different components of the planning problem; (2) when considered in the context of planning under uncertainty, nonmonotonic logics do not retain the domain-independent characteristics of classical (monotonic) logic; and (3) because certain nonmonotonic programming paradigms (for example, frame-based inheritance, nonmonotonic logics) are inherently problem specific, they might be inappropriate for use in solving certain types of planning problems. We discuss how these conclusions affect several current AI research issues.
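
To make the two decision-theoretic ingredients concrete, the following minimal Python sketch (with hypothetical plan names and numbers, not drawn from the article) ranks two candidate plans by expected utility, combining probabilities over planning hypotheses with utilities over planning outcomes.

```python
def expected_utility(outcomes):
    """Expected utility = sum over outcomes of P(outcome) * U(outcome)."""
    return sum(p * u for p, u in outcomes)

# Each plan is a list of (probability, utility) pairs over its possible outcomes.
plans = {
    "act_now":     [(0.7, 10.0), (0.3, -20.0)],  # likely gain, risky downside
    "gather_info": [(0.9,  6.0), (0.1,  -2.0)],  # safer, smaller payoff
}

for name, outcomes in plans.items():
    print(name, expected_utility(outcomes))

best = max(plans, key=lambda name: expected_utility(plans[name]))
print("preferred plan:", best)  # -> gather_info (EU 5.2 vs. 1.0)
```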


Nonmonotonic Reasoning

Journal of Artificial Intelligence Research

Nonmonotonic reasoning concerns situations in which information is incomplete or uncertain. Conclusions drawn therefore lack the iron-clad certainty that comes with classical logical reasoning: new information may change the conclusions, even when all of the original information is retained. Formal ways to capture the mechanisms involved in nonmonotonic reasoning, and to exploit them for computation, as in the answer set programming paradigm, are at the heart of this research area. The six papers accepted for the special track contain significant contributions to the foundations of logic programming under the answer set semantics, to nonmonotonic extensions of description logics, to belief change in restricted settings, and to argumentation.


Applications of Linear Defeasible Logic: combining resource consumption and exceptions to energy management and business processes

arXiv.org Artificial Intelligence

Linear Logic and Defeasible Logic have been adopted to formalise different features of knowledge representation: the consumption of resources, and nonmonotonic reasoning, in particular the representation of exceptions. Recently, a framework that combines sub-structural features, corresponding to resource consumption, with defeasibility aspects for handling potentially conflicting information has been discussed in the literature by some of the authors. Two highly relevant applications have emerged: energy management and business process management. We illustrate a set of guidelines for applying linear defeasible logic in these contexts.


On Rational Monotony and Weak Rational Monotony for Inference Relations Induced by Sets of Minimal C-Representations

AAAI Conferences

Reasoning in the context of a conditional knowledge base containing rules of the form 'If A then usually B' can be defined in terms of preference relations on possible worlds. These preference relations can be modeled by ranking functions that assign a degree of disbelief to each possible world. In general, there are multiple ranking functions that accept a given knowledge base. Several nonmonotonic inference relations have been proposed using c-representations, a subset of all ranking functions. These inference relations take into account subsets of all c-representations based on various notions of minimality, and they operate in different inference modes, i.e., skeptical, weakly skeptical, or credulous. For nonmonotonic inference relations, weaker versions of monotonicity like rational monotony (RM) and weak rational monotony (WRM) have been developed. In this paper, we investigate which of the inference relations induced by sets of minimal c-representations satisfy rational monotony or weak rational monotony.
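
As a point of reference (standard KLM notation, not reproduced from the paper), rational monotony can be stated as the following postulate on a nonmonotonic inference relation |~:

```latex
% Rational Monotony (RM): if gamma follows defeasibly from alpha, and
% not-beta does NOT follow defeasibly from alpha, then gamma still
% follows defeasibly from the strengthened premise alpha AND beta.
\[
\text{(RM)}\qquad
\frac{\alpha \mathrel{|\!\sim} \gamma
      \qquad
      \alpha \mathrel{|\!\not\sim} \neg\beta}
     {\alpha \wedge \beta \mathrel{|\!\sim} \gamma}
\]
```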


Controlled Natural Languages and Default Reasoning

arXiv.org Artificial Intelligence

Controlled natural languages (CNLs) are effective languages for knowledge representation and reasoning. They are designed on the basis of natural languages, with a restricted lexicon and grammar. CNLs are unambiguous and simple, as opposed to their base languages, while preserving the expressiveness and coherence of natural languages. In this report, we focus on a class of CNLs, called machine-oriented CNLs, which have well-defined semantics that can be deterministically translated into formal languages, such as Prolog, for logical reasoning. Over the past 20 years, a number of machine-oriented CNLs have emerged and been used in many application domains for problem solving and question answering. However, few of them support non-monotonic inference. In our work, we propose non-monotonic extensions of CNL to support defeasible reasoning. In the first part of this report, we survey CNLs and compare three influential systems: Attempto Controlled English (ACE), Processable English (PENG), and Computer-processable English (CPL). We compare their language design, semantic interpretations, and reasoning services. In the second part of this report, we first identify typical kinds of non-monotonicity in natural language, such as defaults, exceptions, and conversational implicatures. Then, we propose their representation in CNL and the corresponding formalizations in a form of defeasible reasoning known as Logic Programming with Defaults and Argumentation Theory (LPDA).
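
The following toy Python sketch (purely illustrative; it is not LPDA syntax, and the predicates are hypothetical) shows the kind of default-with-exception pattern such formalizations target: birds normally fly, but penguins are an exception.

```python
# Known facts as (predicate, individual) pairs.
FACTS = {("bird", "tweety"), ("bird", "opus"), ("penguin", "opus")}

def flies(individual, facts):
    """Apply the default 'birds normally fly' unless an exception blocks it."""
    if ("penguin", individual) in facts:   # exception: penguins do not fly
        return False
    return ("bird", individual) in facts   # default applies to ordinary birds

print(flies("tweety", FACTS))  # True  (default applies)
print(flies("opus", FACTS))    # False (exception overrides the default)
```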


A reconstruction of the multipreference closure

arXiv.org Artificial Intelligence

The paper describes a preferential approach for dealing with exceptions in KLM preferential logics, based on the rational closure. It is well known that the rational closure does not allow an independent handling of the inheritance of different defeasible properties of concepts. Several solutions have been proposed to address this problem, the lexicographic closure being the most notable one. In this work, we consider an alternative closure construction, called the Multi Preference closure (MP-closure), which was first considered for reasoning with exceptions in DLs. Here, we reconstruct the notion of MP-closure in the propositional case and show that it is a natural variant of Lehmann's lexicographic closure. Abandoning Maximal Entropy (an alternative route considered but not explored by Lehmann) leads to a construction that exploits a lexicographic ordering different from that of the lexicographic closure and determines a preferential consequence relation rather than a rational one. We show that, building on the MP-closure semantics, rationality can be recovered, at least from the semantic point of view, resulting in a rational consequence relation which is stronger than the rational closure but incomparable with the lexicographic closure. We also show that the MP-closure is stronger than the Relevant Closure.


An Approach to Characterize Graded Entailment of Arguments through a Label-based Framework

arXiv.org Artificial Intelligence

Argumentation theory is a powerful paradigm that formalizes a type of commonsense reasoning aimed at simulating the human ability to resolve a specific problem in an intelligent manner. A classical argumentation process takes into account only the properties related to the intrinsic logical soundness of an argument in order to determine its acceptability status. However, these properties are not always the only ones that matter when establishing an argument's acceptability; other qualities, such as strength, weight, social votes, trust degree, relevance level, and certainty degree, can also play a role.


On Laws and Counterfactuals in Causal Reasoning

AAAI Conferences

We explore the relationships between causal rules and counterfactuals, as well as their relative representation capabilities, in the logical framework of the causal calculus. It will be shown that, though counterfactuals are readily definable on the basis of causal rules, the reverse reduction is achievable only up to a certain logical threshold (basic equivalence). As a result, we will argue that counterfactuals cannot distinguish causal theories that justify different claims of actual causation, which could be seen as the main source of the problem of `structural equivalents' in counterfactual approaches to causation. This will lead us to a general conclusion about the primary role of causal rules in representing causation.


Default Reasoning via Topology and Mathematical Analysis: A Preliminary Report

AAAI Conferences

A default consequence relation α|~β (if α, then normally β) can be naturally interpreted via a `most' generalized quantifier: α|~β is valid iff in `most' α-worlds, β is also true. We define various semantic incarnations of this principle which attempt to make the set of (α ∧ β)-worlds `large' and the set of (α ∧ ¬ β)-worlds `small'. The straightforward implementation of this idea on finite sets is via `clear majority'. We proceed to examine different `majority' interpretations of normality which are defined upon notions of classical mathematics that formalize aspects of `size'. We define default consequence using the notion of asymptotic density from analytic number theory. Asymptotic density provides a way to measure the size of integer sequences that is much more fine-grained and accurate than set cardinality. Further on, in a topological setting, we identify `large' sets with dense sets and `negligibly small' sets with nowhere dense sets. Finally, we define default consequence via the concept of measure, classically developed in mathematical analysis for capturing `size' through a generalization of the notions of length, area and volume. The logics defined via asymptotic density and measure are weaker than the KLM system P, the so-called `conservative core' of nonmonotonic reasoning, and they resemble probabilistic consequence. Topology goes further towards system P, but it misses Cautious Monotony (CM) and AND. Our results show that a `size'-oriented interpretation of default reasoning is context-sensitive and in `most' cases it departs from the preferential approach.
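
As a concrete illustration of the density-based notion of `size' (a sketch using the standard definition of asymptotic density, not code from the paper, with hypothetical example sets), the following Python snippet estimates the density of two sets of integers:

```python
import math

# Asymptotic (natural) density of a set A of positive integers is the
# limit of |A ∩ {1,...,n}| / n as n grows, when that limit exists.

def partial_density(predicate, n):
    """Fraction of {1, ..., n} whose elements satisfy the predicate."""
    return sum(1 for k in range(1, n + 1) if predicate(k)) / n

for n in (10**3, 10**5, 10**6):
    evens = partial_density(lambda k: k % 2 == 0, n)
    squares = partial_density(lambda k: math.isqrt(k) ** 2 == k, n)
    print(f"n={n}: evens~{evens:.4f}, perfect squares~{squares:.6f}")

# The even numbers have density 1/2 (a `large' set in this sense), while
# the perfect squares have density 0 (a `negligibly small' set).
```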


Answering the "why" in Answer Set Programming - A Survey of Explanation Approaches

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) approaches to problem-solving and decision-making are becoming increasingly complex, making their solutions harder to understand. The European Union's new General Data Protection Regulation tries to tackle this problem by stipulating a "right to explanation" for decisions made by AI systems. One of the AI paradigms that may be affected by this new regulation is Answer Set Programming (ASP). Thanks to the emergence of efficient solvers, ASP has recently been used for problem-solving in a variety of domains, including medicine, cryptography, and biology. To ensure the successful application of ASP as a problem-solving paradigm in the future, explanations of ASP solutions are crucial. In this survey, we give an overview of approaches that answer the question of why an answer set is a solution to a given problem, notably off-line justifications, causal graphs, argumentative explanations, and why-not provenance, and highlight their similarities and differences. Moreover, we review methods explaining why a set of literals is not an answer set or why no solution exists at all.
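
For readers unfamiliar with the semantics being explained, the following self-contained Python sketch (a brute-force illustration, not one of the explanation approaches surveyed) checks which interpretations of a tiny propositional program are answer sets via the Gelfond-Lifschitz reduct:

```python
from itertools import chain, combinations

# A normal rule "head :- body_pos, not body_neg" is represented as
# (head, frozenset(positive atoms), frozenset(negated atoms)).
PROGRAM = [
    ("a", frozenset(), frozenset({"b"})),   # a :- not b.
    ("b", frozenset(), frozenset({"a"})),   # b :- not a.
    ("c", frozenset({"a"}), frozenset()),   # c :- a.
]
ATOMS = {"a", "b", "c"}

def reduct(program, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    the candidate set, then delete all remaining negative literals."""
    return [(head, pos) for (head, pos, neg) in program if not (neg & candidate)]

def least_model(positive_program):
    """Least model of a negation-free program via fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_program:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def answer_sets(program, atoms):
    """Enumerate all candidate interpretations and keep the stable ones."""
    candidates = chain.from_iterable(
        combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    return [set(c) for c in candidates
            if least_model(reduct(program, set(c))) == set(c)]

print(answer_sets(PROGRAM, ATOMS))  # two answer sets: {'b'} and {'a', 'c'}
```

On this toy program the two answer sets are {b} and {a, c}; the explanation approaches surveyed in the paper address, for instance, why c belongs to the second answer set or why no answer set contains both a and b.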