

Nonmonotonic Reasoning

Journal of Artificial Intelligence Research

Nonmonotonic reasoning concerns situations in which information is incomplete or uncertain. Conclusions drawn therefore lack the iron-clad certainty that comes with classical logical reasoning. New information, even if all of the original information is retained, may change conclusions. Formal ways to capture the mechanisms involved in nonmonotonic reasoning, and to exploit them for computation as in the answer set programming paradigm, are at the heart of this research area. The six papers accepted for the special track contain significant contributions to the foundations of logic programming under the answer set semantics, to nonmonotonic extensions of description logics, to belief change in restricted settings, and to argumentation.
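The following minimal Python sketch (an invented example, not from any of the papers) illustrates the defining behaviour: a conclusion drawn under incomplete information is withdrawn once more information arrives, even though no original fact is retracted.

```python
# A bird is assumed to fly unless it is known to be a penguin.
def flies(kb: set) -> bool:
    return "bird" in kb and "penguin" not in kb

kb = {"bird"}
assert flies(kb)        # with incomplete information, conclude "flies"

kb.add("penguin")       # new information arrives; nothing is retracted
assert not flies(kb)    # yet the earlier conclusion is withdrawn
```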


A Unified Framework for Nonmonotonic Reasoning with Vagueness and Uncertainty

arXiv.org Artificial Intelligence

Answer set programming (ASP) is a declarative problem-solving paradigm for nonmonotonic reasoning. ASP allows intuitive representation of combinatorial search and optimization problems and is widely used for knowledge representation and reasoning in various applications such as plan generation and natural language processing [14, 15]. However, ASP cannot deal with fuzzy information, where attributes and truth degrees lie in a continuous range of values. Fuzzy Answer Set Programming (FASP) has been proposed as an extension of ASP that allows graded truth values from the interval [0,1]. Theoretical advancement of FASP is remarkable [18, 32, 9, 22, 23].
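As a rough illustration of graded truth values (an invented example, not the FASP semantics itself), the sketch below evaluates one fuzzy rule under the Łukasiewicz t-norm and implication; the atom names and degrees are made up.

```python
# Lukasiewicz connectives over truth degrees in [0, 1].
def t_norm(a: float, b: float) -> float:
    return max(0.0, a + b - 1.0)            # graded conjunction

def implication(body: float, head: float) -> float:
    return min(1.0, 1.0 - body + head)      # degree of rule satisfaction

# Invented degrees: "tall and strong implies athletic".
degrees = {"tall": 0.8, "strong": 0.7, "athletic": 0.6}
body = t_norm(degrees["tall"], degrees["strong"])   # 0.5
print(implication(body, degrees["athletic"]))       # 1.0: rule fully satisfied
```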


Applications of Linear Defeasible Logic: combining resource consumption and exceptions to energy management and business processes

arXiv.org Artificial Intelligence

Linear Logic and Defeasible Logic have been adopted to formalise different features of knowledge representation: consumption of resources, and nonmonotonic reasoning, in particular to represent exceptions. Recently, a framework combining sub-structural features, corresponding to the consumption of resources, with defeasibility aspects to handle potentially conflicting information has been discussed in the literature by some of the authors. Two very relevant applications have emerged: energy management and business process management. We illustrate a set of guidelines to determine how to apply linear defeasible logic to those contexts.
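The following toy Python sketch (our illustration, with hypothetical rule, resource, and fact names) hints at the combination the paper studies: a defeasible rule that both consumes resources when it fires and can be blocked by an exception.

```python
from collections import Counter

resources = Counter({"energy_unit": 2})   # available resources
facts, exceptions = {"request"}, set()

def fire(premise: str, conclusion: str, cost: Counter) -> None:
    # The rule fires only if its premise holds, no exception defeats
    # its conclusion, and enough resources remain; firing consumes them.
    if premise in facts and conclusion not in exceptions and not (cost - resources):
        resources.subtract(cost)
        facts.add(conclusion)

fire("request", "serve_load", Counter({"energy_unit": 1}))
print(facts, dict(resources))   # 'serve_load' derived, one unit consumed

exceptions.add("serve_load")    # an exception now defeats the rule
fire("request", "serve_load", Counter({"energy_unit": 1}))   # blocked
```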


About epistemic negation and world views in Epistemic Logic Programs

arXiv.org Artificial Intelligence

In this paper we consider Epistemic Logic Programs, which extend Answer Set Programming (ASP) with "epistemic operators" and "epistemic negation", and a recent approach to the semantics of such programs in terms of World Views. We make some observations on the existence and number of world views. We show how to exploit an extended ASP semantics in order to: (i) provide a characterization of world views, different from existing ones; (ii) query world views, as well as the whole set of world views.
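As a rough illustration of the world-view idea (a drastically simplified, invented example, not the authors' characterization), the sketch below brute-forces the single epistemic guess for the program p :- not K q and keeps only guesses confirmed by the resulting answer sets.

```python
def answer_sets(assume_Kq: bool) -> list:
    # Program: p :- not K q.  Under the guess "K q holds" the rule is
    # blocked; otherwise it fires. (q is never derivable here.)
    return [set()] if assume_Kq else [{"p"}]

for guess in (True, False):
    sets = answer_sets(guess)
    q_known = all("q" in s for s in sets)   # K q: q holds in every answer set
    if q_known == guess:                    # keep only self-confirming guesses
        print("world view:", sets)          # prints: world view: [{'p'}]
```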


A Unified Algebraic Framework for Non-Monotonicity

arXiv.org Artificial Intelligence

Tremendous research effort has been dedicated over the years to thoroughly investigating non-monotonic reasoning. With the abundance of non-monotonic logical formalisms, a unified theory that enables comparing the different approaches is much called for. In this paper, we present an algebraic graded logic, which we refer to as LogAG, capable of encompassing a wide variety of non-monotonic formalisms. We build on Lin and Shoham's argument systems, first developed to formalize non-monotonic commonsense reasoning. We show how to encode argument systems as LogAG theories, and prove that LogAG captures the notion of belief spaces in argument systems. Since argument systems capture default logic, autoepistemic logic, the principle of negation as failure, and circumscription, our results show that LogAG captures these non-monotonic logical formalisms as well. Previous results show that LogAG subsumes possibilistic logic and any non-monotonic inference relation satisfying Makinson's rationality postulates. In this way, LogAG provides a powerful unified framework for non-monotonicity.


On Rational Monotony and Weak Rational Monotony for Inference Relations Induced by Sets of Minimal C-Representations

AAAI Conferences

Reasoning in the context of a conditional knowledge base containing rules of the form `If A then usually B' can be defined in terms of preference relations on possible worlds. These preference relations can be modeled by ranking functions that assign a degree of disbelief to each possible world. In general, there are multiple ranking functions that accept a given knowledge base. Several nonmonotonic inference relations have been proposed using c-representations, a subset of all ranking functions. These inference relations take into account subsets of all c-representations based on various notions of minimality, and they operate in different inference modes, i.e., skeptical, weakly skeptical, or credulous. For nonmonotonic inference relations, weaker versions of monotonicity such as rational monotony (RM) and weak rational monotony (WRM) have been developed. In this paper, we investigate which of the inference relations induced by sets of minimal c-representations satisfy rational monotony or weak rational monotony.
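The following minimal Python sketch (with an invented ranking, not a c-representation computed from a knowledge base) shows the underlying inference scheme: a ranking function κ assigns disbelief degrees to worlds, and A |~ B holds iff the most plausible A ∧ B world is strictly less disbelieved than the most plausible A ∧ ¬B world.

```python
# Worlds over atoms (b)ird and (f)lies, with invented disbelief degrees.
kappa = {("b", "f"): 0, ("b", "-f"): 1, ("-b", "f"): 1, ("-b", "-f"): 0}

def rank(prop) -> float:
    # kappa of a proposition: minimum rank over the worlds satisfying it
    ranks = [kappa[w] for w in kappa if prop(w)]
    return min(ranks) if ranks else float("inf")

def entails(a, b) -> bool:
    # A |~ B  iff  kappa(A and B) < kappa(A and not B)
    return rank(lambda w: a(w) and b(w)) < rank(lambda w: a(w) and not b(w))

bird = lambda w: "b" in w
fly = lambda w: "f" in w
print(entails(bird, fly))   # True: birds normally fly under this ranking
```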


A reconstruction of the multipreference closure

arXiv.org Artificial Intelligence

The paper describes a preferential approach for dealing with exceptions in KLM preferential logics, based on the rational closure. It is well known that the rational closure does not allow an independent handling of the inheritance of different defeasible properties of concepts. Several solutions have been proposed to address this problem, the lexicographic closure being the most notable one. In this work, we consider an alternative closure construction, called the Multi Preference closure (MP-closure), which was first considered for reasoning with exceptions in description logics (DLs). Here, we reconstruct the notion of MP-closure in the propositional case and show that it is a natural variant of Lehmann's lexicographic closure. Abandoning Maximal Entropy (an alternative route already considered but not explored by Lehmann) leads to a construction which exploits a lexicographic ordering different from that of the lexicographic closure, and determines a preferential consequence relation rather than a rational one. We show that, building on the MP-closure semantics, rationality can be recovered, at least from the semantic point of view, resulting in a rational consequence relation which is stronger than the rational closure but incomparable with the lexicographic closure. We also show that the MP-closure is stronger than the Relevant Closure.
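As background for the closures compared above, the sketch below computes the rational closure (Z-)ranking of a small invented conditional knowledge base by iterated tolerance tests; it illustrates the standard construction the paper builds on, not the MP-closure itself.

```python
from itertools import product

atoms = ("bird", "penguin", "fly")
worlds = [dict(zip(atoms, v)) for v in product((True, False), repeat=3)]

# Conditionals "A |~ B" as (antecedent, consequent) pairs over worlds.
kb = [
    (lambda w: w["bird"], lambda w: w["fly"]),          # birds usually fly
    (lambda w: w["penguin"], lambda w: w["bird"]),      # penguins are birds
    (lambda w: w["penguin"], lambda w: not w["fly"]),   # penguins don't fly
]

def tolerated(cond, conds) -> bool:
    # (A, B) is tolerated iff some world verifies A and B while also
    # satisfying every material counterpart A' -> B' in conds.
    a, b = cond
    return any(a(w) and b(w) and all(not a2(w) or b2(w) for a2, b2 in conds)
               for w in worlds)

rank, remaining = 0, list(kb)
while remaining:
    level = [c for c in remaining if tolerated(c, remaining)]
    if not level:                        # inconsistent knowledge base
        break
    print(f"rank {rank}: {len(level)} conditional(s)")
    remaining = [c for c in remaining if c not in level]
    rank += 1
# Output: rank 0: 1 conditional(s)   (birds fly)
#         rank 1: 2 conditional(s)   (the more specific penguin rules)
```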


Optimizing Answer Set Computation via Heuristic-Based Decomposition

arXiv.org Artificial Intelligence

Answer Set Programming (ASP) is a purely declarative formalism developed in the field of logic programming and nonmonotonic reasoning: computational problems are encoded by logic programs whose answer sets, corresponding to solutions, are computed by an ASP system. Different, semantically equivalent, programs can be defined for the same problem; however, the performance of systems evaluating them might vary significantly. We propose an approach for automatically transforming an input logic program into an equivalent one that can be evaluated more efficiently. One can make use of existing tree-decomposition techniques for rewriting selected rules into multiple smaller ones; the idea is to guide and adaptively apply them on the basis of proper new heuristics, to obtain a smart rewriting algorithm to be integrated into an ASP system. The method is rather general: it can be adapted to any system and can implement different preference policies. Furthermore, we define a set of new heuristics tailored to optimizing grounding, one of the main phases of ASP computation; we use them to implement the approach in the ASP system DLV, in particular in its grounding subsystem I-DLV, and carry out an extensive experimental activity to assess the impact of the proposal. Under consideration in Theory and Practice of Logic Programming (TPLP).
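The following Python sketch (hypothetical predicate names, naive split heuristic) illustrates the flavour of the rewriting: a rule with a long body is decomposed into two rules linked by an auxiliary atom projected onto the shared variables, which is the kind of transformation tree-decomposition-based rewriters perform.

```python
def decompose(head: str, body: list, shared_vars: list) -> list:
    # Replace the second half of the body by an auxiliary atom projected
    # onto the variables that the two halves (and the head) share.
    aux = f"aux({','.join(shared_vars)})"
    half = len(body) // 2
    return [f"{head} :- {', '.join(body[:half] + [aux])}.",
            f"{aux} :- {', '.join(body[half:])}."]

rules = decompose("reach(X,Z)",
                  ["edge(X,Y)", "edge(Y,W)", "edge(W,Z)", "node(Z)"],
                  ["W", "Z"])
print("\n".join(rules))
# reach(X,Z) :- edge(X,Y), edge(Y,W), aux(W,Z).
# aux(W,Z) :- edge(W,Z), node(Z).
```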


On Laws and Counterfactuals in Causal Reasoning

AAAI Conferences

We explore the relationships between causal rules and counterfactuals, as well as their relative representation capabilities, in the logical framework of the causal calculus. It will be shown that, though counterfactuals are readily definable on the basis of causal rules, the reverse reduction is achievable only up to a certain logical threshold (basic equivalence). As a result, we will argue that counterfactuals cannot distinguish causal theories that justify different claims of actual causation, which could be seen as the main source of the problem of `structural equivalents' in counterfactual approaches to causation. This will lead us to a general conclusion about the primary role of causal rules in representing causation.


Default Reasoning via Topology and Mathematical Analysis: A Preliminary Report

AAAI Conferences

A default consequence relation α|~β (if α, then normally β) can be naturally interpreted via a `most' generalized quantifier: α|~β is valid iff in `most' α-worlds, β is also true. We define various semantic incarnations of this principle which attempt to make the set of (α ∧ β)-worlds `large' and the set of (α ∧ ¬β)-worlds `small'. The straightforward implementation of this idea on finite sets is via `clear majority'. We proceed to examine different `majority' interpretations of normality which are defined upon notions of classical mathematics that formalize aspects of `size'. We define default consequence using the notion of asymptotic density from analytic number theory. Asymptotic density provides a way to measure the size of integer sequences that is much more fine-grained and accurate than set cardinality. Further on, in a topological setting, we identify `large' sets with dense sets and `negligibly small' sets with nowhere dense sets. Finally, we define default consequence via the concept of measure, classically developed in mathematical analysis for capturing `size' through a generalization of the notions of length, area and volume. The logics defined via asymptotic density and measure are weaker than the KLM system P, the so-called `conservative core' of nonmonotonic reasoning, and they resemble probabilistic consequence. Topology goes a longer way towards system P but misses Cautious Monotony (CM) and AND. Our results show that a `size'-oriented interpretation of default reasoning is context-sensitive and in `most' cases departs from the preferential approach.
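As a quick numerical illustration of the asymptotic-density reading of `most' (an invented example, computed by truncation rather than a true limit), the sketch below estimates the relative density of a default's consequent within its antecedent.

```python
from math import isqrt

def relative_density(alpha, beta, n: int = 100_000) -> float:
    # |{k <= n : alpha(k) and beta(k)}| / |{k <= n : alpha(k)}|
    a = [k for k in range(1, n + 1) if alpha(k)]
    return sum(1 for k in a if beta(k)) / len(a)

even = lambda k: k % 2 == 0
not_square = lambda k: isqrt(k) ** 2 != k

# "Even numbers are normally not perfect squares": density tends to 1.
print(relative_density(even, not_square))   # ~0.997 for n = 100000
```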