When Logical Conclusions Do Not Hold True. Inference rules are called nonmonotonic when they allow intelligent systems "to augment their beliefs by new ones that do not logically follow from their explicit ones"; such inferences may later have to be retracted.
Ordinary inference rules are monotonic "because the set of theorems derivable from premises is not reduced by adding to the premises."
– from Logical Foundations of Artificial Intelligence by M. R. Genesereth and N. J. Nilsson (1987)
This paper addresses the challenge of modeling human reasoning within a new framework called Cognitive Argumentation. This framework rests on the assumption that human logical reasoning is inherently a process of dialectic argumentation, and aims to develop a cognitive model of human reasoning that is computational and implementable. To give logical reasoning a human cognitive form, the framework relies on cognitive principles, based on empirical and theoretical work in Cognitive Science, to suitably adapt a general and abstract framework of computational argumentation from AI. The approach of Cognitive Argumentation is evaluated with respect to Byrne's suppression task, where the aim is not only to capture the suppression effect between different groups of people but also to account for the variation of reasoning within each group. Two main cognitive principles are particularly important for capturing human conditional reasoning and explaining the participants' responses: (i) the interpretation of a condition within a conditional as sufficient and/or necessary, and (ii) the mode of reasoning, either predictive or explanatory. We argue that Cognitive Argumentation provides a coherent and cognitively adequate model of human conditional reasoning that allows a natural distinction between definite and plausible conclusions, exhibiting the important characteristics of context-sensitive and defeasible reasoning.
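The effect of principle (i) can be illustrated with a minimal sketch (not the paper's argumentation framework; rule names and the `predict` helper are hypothetical) using the classic suppression-task material: "If she has an essay to write, she will study late in the library."

```python
# Minimal sketch: how reading conditions as sufficient vs. necessary
# changes which conclusions are drawn (all names are illustrative).

def predict(antecedents_true, conditionals, reading):
    """Derive consequents from true antecedents.
    'sufficient': any one true condition licenses the consequent.
    'necessary' : every condition attached to the consequent must hold."""
    derived = set()
    by_consequent = {}
    for ante, cons in conditionals:
        by_consequent.setdefault(cons, []).append(ante)
    for cons, antes in by_consequent.items():
        if reading == "sufficient":
            if any(a in antecedents_true for a in antes):
                derived.add(cons)
        else:  # "necessary"
            if all(a in antecedents_true for a in antes):
                derived.add(cons)
    return derived

rules = [("essay", "library"), ("library_open", "library")]

# Sufficient reading: modus ponens goes through from "essay" alone.
print(predict({"essay"}, rules, "sufficient"))  # {'library'}
# Necessary reading: the extra condition suppresses the inference.
print(predict({"essay"}, rules, "necessary"))   # set()
```

Participants who treat "the library is open" as an additional necessary condition withhold the modus ponens conclusion, which is the suppression effect the abstract refers to.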
Nonmonotonic reasoning concerns situations in which information is incomplete or uncertain. Thus, conclusions drawn lack the iron-clad certainty that comes with classical logic reasoning. New information, even if the original information is retained, may change conclusions. Formal ways to capture the mechanisms involved in nonmonotonic reasoning, and to exploit them for computation, as in the answer set programming paradigm, are at the heart of this research area. The six papers accepted for the special track contain significant contributions to the foundations of logic programming under the answer set semantics, to nonmonotonic extensions of description logics, to belief change in restricted settings, and to argumentation.
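The hallmark of such reasoning, conclusions withdrawn when the premise set grows, can be shown with a toy sketch (plain Python, not answer set programming; the `flies` predicate is illustrative only):

```python
# Toy default rule "birds normally fly": the conclusion is retracted
# when new information ("penguin") is added, even though the original
# fact ("bird") is retained -- the signature of nonmonotonicity.

def flies(facts):
    """Conclude 'flies' from 'bird' unless 'penguin' blocks the default."""
    return "bird" in facts and "penguin" not in facts

print(flies({"bird"}))             # True: presumably flies
print(flies({"bird", "penguin"}))  # False: larger premise set, fewer conclusions
```

A monotonic logic could never exhibit the second step: adding premises can only preserve or enlarge the set of derivable conclusions.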
Linear Logic and Defeasible Logic have been adopted to formalise different features of knowledge representation: consumption of resources, and nonmonotonic reasoning, in particular to represent exceptions. Recently, a framework combining sub-structural features, corresponding to the consumption of resources, with defeasibility aspects to handle potentially conflicting information has been discussed in the literature by some of the authors. Two very relevant applications have emerged: energy management and business process management. We illustrate a set of guidelines to determine how to apply linear defeasible logic in those contexts.
Reasoning in the context of a conditional knowledge base containing rules of the form 'If A then usually B' can be defined in terms of preference relations on possible worlds. These preference relations can be modeled by ranking functions that assign a degree of disbelief to each possible world. In general, there are multiple ranking functions that accept a given knowledge base. Several nonmonotonic inference relations have been proposed using c-representations, a subset of all ranking functions. These inference relations take subsets of all c-representations based on various notions of minimality into account, and they operate in different inference modes, i.e., skeptical, weakly skeptical, or credulous. For nonmonotonic inference relations, weaker versions of monotonicity like rational monotony (RM) and weak rational monotony (WRM) have been developed. In this paper, we investigate which of the inference relations induced by sets of minimal c-representations satisfy rational monotony or weak rational monotony.
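The acceptance condition for ranking functions, and the difference between skeptical and credulous inference over a set of them, can be sketched as follows (a toy example over two atoms, not c-representations; the ranking functions `k1`, `k2` are made up for illustration):

```python
from itertools import product

# A ranking function kappa assigns a degree of disbelief to each world.
# kappa accepts the conditional (B|A) iff the best (least disbelieved)
# world verifying A and B ranks strictly below the best world
# verifying A and not-B.

WORLDS = list(product([True, False], repeat=2))  # worlds as (bird, flies)

def accepts(kappa, antecedent, consequent):
    verifying = min((kappa[w] for w in WORLDS
                     if antecedent(w) and consequent(w)), default=float("inf"))
    falsifying = min((kappa[w] for w in WORLDS
                      if antecedent(w) and not consequent(w)), default=float("inf"))
    return verifying < falsifying

bird = lambda w: w[0]
fly = lambda w: w[1]

# Two illustrative ranking functions accepting "birds normally fly".
k1 = {(True, True): 0, (True, False): 1, (False, True): 0, (False, False): 0}
k2 = {(True, True): 0, (True, False): 2, (False, True): 1, (False, False): 0}
kappas = [k1, k2]

skeptical = all(accepts(k, bird, fly) for k in kappas)  # every kappa accepts
credulous = any(accepts(k, bird, fly) for k in kappas)  # some kappa accepts
print(skeptical, credulous)  # True True
```

Skeptical inference only licenses conclusions supported by every ranking function in the chosen set, so it is at most as strong as credulous inference; the paper's question is which of these induced relations retain (weak) rational monotony.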
The paper describes a preferential approach for dealing with exceptions in KLM preferential logics, based on the rational closure. It is well known that the rational closure does not allow an independent handling of the inheritance of different defeasible properties of concepts. Several solutions have been proposed to address this problem, the lexicographic closure being the most notable one. In this work, we consider an alternative closure construction, called the Multi Preference closure (MP-closure), which was first considered for reasoning with exceptions in DLs. Here, we reconstruct the notion of MP-closure in the propositional case and show that it is a natural variant of Lehmann's lexicographic closure. Abandoning Maximal Entropy (an alternative route already considered but not explored by Lehmann) leads to a construction which exploits a different lexicographic ordering w.r.t. the lexicographic closure, and determines a preferential consequence relation rather than a rational consequence relation. We show that, building on the MP-closure semantics, rationality can be recovered, at least from the semantic point of view, resulting in a rational consequence relation which is stronger than the rational closure but incomparable with the lexicographic closure. We also show that the MP-closure is stronger than the Relevant Closure.
Koutras, Costas D. (American University of the Middle East) | Liaskos, Konstantinos (American University of the Middle East) | Moyzes, Christos (University of Liverpool) | Rantsoudis, Christos (Institut de Recherche en Informatique de Toulouse)
A default consequence relation α|~β (if α, then normally β) can be naturally interpreted via a `most' generalized quantifier: α|~β is valid iff in `most' α-worlds, β is also true. We define various semantic incarnations of this principle which attempt to make the set of (α ∧ β)-worlds `large' and the set of (α ∧ ¬β)-worlds `small'. The straightforward implementation of this idea on finite sets is via `clear majority'. We proceed to examine different `majority' interpretations of normality which are defined upon notions of classical mathematics that formalize aspects of `size'. We define default consequence using the notion of asymptotic density from analytic number theory. Asymptotic density provides a way to measure the size of integer sequences that is much more fine-grained and accurate than set cardinality. Further on, in a topological setting, we identify `large' sets with dense sets and `negligibly small' sets with nowhere dense sets. Finally, we define default consequence via the concept of measure, classically developed in mathematical analysis for capturing `size' through a generalization of the notions of length, area and volume. The logics defined via asymptotic density and measure are weaker than the KLM system P, the so-called `conservative core' of nonmonotonic reasoning, and they resemble probabilistic consequence. Topology goes a longer way towards system P, but it misses Cautious Monotony (CM) and AND. Our results show that a `size'-oriented interpretation of default reasoning is context-sensitive and in `most' cases it departs from the preferential approach.
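Asymptotic density itself is easy to approximate numerically: d(A) = lim_{n→∞} |A ∩ {1,…,n}| / n. A small sketch (finite truncation, not the limit; the `density` helper is illustrative):

```python
# Finite approximation of asymptotic density: the fraction of integers
# in {1..n} satisfying a predicate. The even numbers have density 1/2;
# the perfect squares have density 0 (their count up to n grows like sqrt(n)),
# even though both sets have the same cardinality.

def density(pred, n=100_000):
    """Approximate the asymptotic density of {k : pred(k)} up to n."""
    return sum(1 for k in range(1, n + 1) if pred(k)) / n

evens = density(lambda k: k % 2 == 0)
squares = density(lambda k: int(k ** 0.5) ** 2 == k)
print(evens, squares)  # 0.5 and a value near 0 (316/100000 here)
```

This is the sense in which density is more fine-grained than cardinality: both sets above are countably infinite, yet density separates them, which is what makes it usable as a notion of a `large' set of worlds.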
We explore the relationships between causal rules and counterfactuals, as well as their relative representation capabilities, in the logical framework of the causal calculus. It will be shown that, though counterfactuals are readily definable on the basis of causal rules, the reverse reduction is achievable only up to a certain logical threshold (basic equivalence). As a result, we will argue that counterfactuals cannot distinguish causal theories that justify different claims of actual causation, which could be seen as the main source of the problem of `structural equivalents' in counterfactual approaches to causation. This will lead us to a general conclusion about the primary role of causal rules in representing causation.
Linear Logic and Defeasible Logic have been adopted to formalise different features relevant to agents: consumption of resources, and reasoning with exceptions. We propose a framework to combine sub-structural features, corresponding to the consumption of resources, with defeasibility aspects, and we discuss the design choices for the framework.
The publication of the seminal issue on nonmonotonic logics by the Artificial Intelligence Journal in 1980 resulted in a new area of research in knowledge representation and changed the mainstream paradigm of logic that originated in antiquity. It established an important area of mathematical logic and led to discoveries of connections between logic, knowledge representation and computation which attracted not only computer scientists but also logicians, mathematicians and philosophers. Importantly, it also changed the perspective on applications of logic. Nonmonotonic reasoning concerns situations when information is incomplete or uncertain. Thus, conclusions drawn lack the iron-clad certainty that comes with classical logic reasoning.
Conditional information is an integral part of representation and inference processes involving causal relationships, temporal events, and even the deliberation of cognitive agents about impossible scenarios. For formalizing these inferences, a proper formal representation is needed. Psychological studies indicate that classical, monotonic logic is not the appropriate model for capturing human reasoning: there are cases where participants systematically deviate from classically valid answers, while in other cases they even endorse logically invalid ones. Many analyses have covered individual inference rules applied by human reasoners in isolation. In this paper we define inference patterns as a formalization of the joint usage or avoidance of these rules. Considering patterns instead of single inferences opens the way for categorizing inference studies with regard to their qualitative results. We apply plausibility relations, which provide basic formal models for many theories of conditionals, nonmonotonic reasoning, and belief revision, to assess the rationality of the patterns and thus of the individual inferences drawn in the study. By this replacement of classical logic with formalisms most suitable for conditionals, we shift the basis of judging rationality from compatibility with classical entailment to consistency in a logic of conditionals. Using inductive reasoning on the plausibility relations, we reverse-engineer conditional knowledge bases as an explanatory model for, and formalization of, the participants' background knowledge. In this way, the conditional knowledge bases derived from the inference patterns provide an explanation for the outcome of the study that generated the pattern.
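How a plausibility relation induces an inference pattern can be sketched with a toy example (two atoms, a made-up plausibility ordering; `entails` implements the standard "true in all most-plausible antecedent-worlds" reading, not the paper's specific construction):

```python
# Worlds are (A, B) valuations, ranked by plausibility (lower = more
# plausible). "A |~ B" holds iff B is true in all most-plausible A-worlds.

ranks = {  # an illustrative ordering for the conditional "if A then normally B"
    (True, True): 0, (False, False): 0, (False, True): 0, (True, False): 1,
}

def entails(ante, cons):
    a_worlds = [w for w in ranks if ante(w)]
    best = min(ranks[w] for w in a_worlds)
    return all(cons(w) for w in a_worlds if ranks[w] == best)

A = lambda w: w[0]
B = lambda w: w[1]
not_A = lambda w: not w[0]
not_B = lambda w: not w[1]

# The induced inference pattern over the four classic rules:
print(entails(A, B))          # MP (modus ponens):        True
print(entails(not_B, not_A))  # MT (modus tollens):       True
print(entails(B, A))          # AC (affirm. consequent):  False
print(entails(not_A, not_B))  # DA (denial of antecedent): False
```

Here a single plausibility ordering jointly endorses MP and MT while avoiding AC and DA; reading this direction backwards, from an observed pattern to an ordering and a conditional knowledge base that generate it, is the reverse-engineering step the abstract describes.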