When Logical Conclusions Do Not Hold True. Inference rules are called nonmonotonic when they allow intelligent systems "to augment their beliefs by new ones that do not logically follow from their explicit ones"; one or another of these inferences may later have to be retracted.
Ordinary inference rules are monotonic "because the set of theorems derivable from premises is not reduced by adding to the premises."
– from Logical foundations of artificial intelligence by MR Genesereth and NJ Nilsson (1987)
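The contrast drawn in the quotation above can be shown with a minimal sketch: adding a premise can remove a previously derivable conclusion. The rule, the string encoding of facts, and the Tweety example are our illustrative choices, not from the quoted text.

```python
# Toy default rule "birds normally fly": the conclusion is withdrawn when
# an exception (here, being a penguin) is added to the premises.

def derive(facts):
    """Return the facts plus every default conclusion not blocked by an exception."""
    conclusions = set(facts)
    for f in facts:
        if f.startswith("bird:"):
            name = f.split(":", 1)[1]
            if f"penguin:{name}" not in facts:  # the exception blocks the default
                conclusions.add(f"flies:{name}")
    return conclusions

beliefs = derive({"bird:tweety"})
assert "flies:tweety" in beliefs        # default conclusion holds

beliefs = derive({"bird:tweety", "penguin:tweety"})
assert "flies:tweety" not in beliefs    # enlarging the premises retracted it
```

A monotonic consequence operator could never behave this way: everything derivable from the smaller premise set would remain derivable from the larger one.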
Koutras, Costas D. (American University of the Middle East) | Liaskos, Konstantinos (American University of the Middle East) | Moyzes, Christos (University of Liverpool) | Rantsoudis, Christos (Institut de Recherche en Informatique de Toulouse)
A default consequence relation α|~β (if α, then normally β) can be naturally interpreted via a `most' generalized quantifier: α|~β is valid iff in `most' α-worlds, β is also true. We define various semantic incarnations of this principle which attempt to make the set of (α ∧ β)-worlds `large' and the set of (α ∧ ¬ β)-worlds `small'. The straightforward implementation of this idea on finite sets is via `clear majority'. We proceed to examine different `majority' interpretations of normality which are defined upon notions of classical mathematics which formalize aspects of `size'. We define default consequence using the notion of asymptotic density from analytic number theory. Asymptotic density provides a way to measure the size of integer sequences in a way much more fine-grained and accurate than set cardinality. Further, in a topological setting, we identify `large' sets with dense sets and `negligibly small' sets with nowhere dense sets. Finally, we define default consequence via the concept of measure, classically developed in mathematical analysis for capturing `size' through a generalization of the notions of length, area and volume. The logics defined via asymptotic density and measure are weaker than the KLM system P, the so-called `conservative core' of nonmonotonic reasoning, and they resemble probabilistic consequence. Topology goes a longer way towards system P but it misses Cautious Monotony (CM) and AND. Our results show that a `size'-oriented interpretation of default reasoning is context-sensitive and in `most' cases it departs from the preferential approach.
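The `clear majority' reading on finite world sets, and a naive finite approximation of asymptotic density, can be sketched as follows. The helper names, the example formulas, and the cutoff n are our illustrative choices, not the paper's definitions.

```python
from itertools import product

def worlds(atoms):
    """All truth assignments over the given propositional atoms."""
    return [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]

def clear_majority(alpha, beta, atoms):
    """alpha |~ beta holds iff beta is true in strictly more alpha-worlds than not."""
    a_worlds = [w for w in worlds(atoms) if alpha(w)]
    yes = sum(1 for w in a_worlds if beta(w))
    return yes > len(a_worlds) - yes

# Over {a, b}: "a or b" holds in 3 of 4 worlds, "a and b" in only 1 of 4.
assert clear_majority(lambda w: True, lambda w: w["a"] or w["b"], ["a", "b"])
assert not clear_majority(lambda w: True, lambda w: w["a"] and w["b"], ["a", "b"])

def asymptotic_density(is_member, n=10000):
    """Finite approximation of density: |{k <= n : is_member(k)}| / n."""
    return sum(1 for k in range(1, n + 1) if is_member(k)) / n

# The even numbers have density 1/2, although they are equinumerous
# with the integers -- density is finer-grained than cardinality.
assert abs(asymptotic_density(lambda k: k % 2 == 0) - 0.5) < 0.01
```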
We explore the relationships between causal rules and counterfactuals, as well as their relative representation capabilities, in the logical framework of the causal calculus. It will be shown that, though counterfactuals are readily definable on the basis of causal rules, the reverse reduction is achievable only up to a certain logical threshold (basic equivalence). As a result, we will argue that counterfactuals cannot distinguish causal theories that justify different claims of actual causation, which could be seen as the main source of the problem of `structural equivalents' in counterfactual approaches to causation. This will lead us to a general conclusion about the primary role of causal rules in representing causation.
Linear Logic and Defeasible Logic have been adopted to formalise different features relevant to agents: consumption of resources, and reasoning with exceptions. We propose a framework to combine sub-structural features, corresponding to the consumption of resources, with defeasibility aspects, and we discuss the design choices for the framework.
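The two features being combined can be sketched together in a toy interpreter: a rule whose premises are resources that firing consumes (the sub-structural reading), and which is additionally defeated when an exception is known. The rule format and the coffee-machine example are our illustrative assumptions, not the framework proposed in the paper.

```python
from collections import Counter

def fire(rule, resources, facts):
    """Fire a defeasible, resource-consuming rule (needs, produces, unless)."""
    needs, produces, unless = rule
    if any(e in facts for e in unless):              # defeated by an exception
        return resources, facts
    if all(resources[r] >= n for r, n in needs.items()):
        return resources - Counter(needs), facts | {produces}  # consume resources
    return resources, facts

# "Normally, one coin buys a coffee, unless the machine is broken."
rule = (Counter({"coin": 1}), "coffee", {"broken"})

res, facts = fire(rule, Counter({"coin": 1}), set())
assert "coffee" in facts and res["coin"] == 0        # rule fired, coin consumed

res2, facts2 = fire(rule, Counter({"coin": 1}), {"broken"})
assert "coffee" not in facts2 and res2["coin"] == 1  # defeated, coin kept
```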
We propose analyzing conditional reasoning by appeal to a notion of intervention on a simulation program, formalizing and subsuming a number of approaches to conditional thinking in the recent AI literature. Our main results include a series of axiomatizations, allowing comparison between this framework and existing frameworks (normality-ordering models, causal structural equation models), and a complexity result establishing NP-completeness of the satisfiability problem. Perhaps surprisingly, some of the basic logical principles common to all existing approaches are invalidated in our causal simulation approach. We suggest that this additional flexibility is important in modeling some intuitive examples.
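The core idea of intervening on a simulation program can be sketched in a few lines: the program assigns each variable by its usual mechanism unless an intervention overrides that assignment, and a counterfactual is evaluated by rerunning the intervened program. The variable names and mechanisms here are our illustrative assumptions, not the paper's formal framework.

```python
def simulate(exogenous, interventions=None):
    """Run a tiny simulation program; an intervention overrides a
    variable's usual mechanism before downstream variables are computed."""
    do = interventions or {}
    v = {}
    v["rain"] = exogenous["rain"]
    v["sprinkler"] = do.get("sprinkler", not v["rain"])
    v["wet"] = do.get("wet", v["rain"] or v["sprinkler"])
    return v

# Factually: no rain, so the sprinkler runs and the grass is wet.
actual = simulate({"rain": False})
assert actual["wet"] == True

# Counterfactual "had the sprinkler been off": intervene and rerun.
counterfactual = simulate({"rain": False}, {"sprinkler": False})
assert counterfactual["wet"] == False
```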
Conditional information is an integral part of representation and inference processes of causal relationships, temporal events, and even the deliberation about impossible scenarios of cognitive agents. For formalizing these inferences, a proper formal representation is needed. Psychological studies indicate that classical, monotonic logic is not the appropriate model for capturing human reasoning: there are cases where the participants systematically deviate from classically valid answers, while in other cases they even endorse logically invalid ones. Many prior analyses considered individual inference rules applied by human reasoners in isolation. In this paper we define inference patterns as a formalization of the joint usage or avoidance of these rules. Considering patterns instead of single inferences opens the way for categorizing inference studies with regard to their qualitative results. We apply plausibility relations, which provide basic formal models for many theories of conditionals, nonmonotonic reasoning, and belief revision, to assess the rationality of the patterns and thus the individual inferences drawn in the study. By this replacement of classical logic with formalisms most suitable for conditionals, we shift the basis of judging rationality from compatibility with classical entailment to consistency in a logic of conditionals. Using inductive reasoning on the plausibility relations, we reverse engineer conditional knowledge bases as an explanatory model for, and formalization of, the background knowledge of the participants. In this way the conditional knowledge bases derived from the inference patterns provide an explanation for the outcome of the study that generated the inference pattern.
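The standard plausibility semantics underlying such formal models can be sketched with a ranked set of worlds: a conditional (β|α) is accepted iff β holds in the most plausible α-worlds. The ranking below and the bird/penguin atoms are our illustrative assumptions, not the knowledge bases derived in the paper.

```python
def accepts(ranking, alpha, beta):
    """(beta | alpha) is accepted iff beta holds in every most plausible
    (lowest-rank) alpha-world; vacuously accepted if no alpha-world exists."""
    a_worlds = [w for w in ranking if alpha(w)]
    if not a_worlds:
        return True
    best = min(ranking[w] for w in a_worlds)
    return all(beta(w) for w in a_worlds if ranking[w] == best)

# A hypothetical ranking: worlds are the sets of atoms they make true,
# lower rank means more plausible.
ranking = {
    frozenset({"bird", "flies"}): 0,     # the normal bird-worlds
    frozenset({"bird"}): 1,              # abnormal: a non-flying bird
    frozenset({"bird", "penguin"}): 1,
}
bird = lambda w: "bird" in w
flies = lambda w: "flies" in w
penguin = lambda w: "penguin" in w

assert accepts(ranking, bird, flies)                       # bird |~ flies
assert accepts(ranking, penguin, lambda w: not flies(w))   # penguin |~ not-flies
```

Note that both conditionals are accepted together although their consequents conflict: this is the nonmonotonic behavior that classical entailment cannot model.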
The Sixth International Workshop on Nonmonotonic Reasoning was held 10 to 12 June 1996 in Timberline, Oregon. The aim of the workshop was to bring together active researchers interested in nonmonotonic reasoning to discuss current research, results, and problems of both a theoretical and a practical nature. The authors of the technical papers accepted for the workshop represented 10 countries: Austria, Brazil, Canada, France, Germany, Israel, Italy, the Netherlands, the United States, and Venezuela. The papers described new work on default logic; circumscription; modal nonmonotonic logics; logic programming; abduction; the frame problem; and other subjects, including qualitative probabilities.
The contributions to this workshop indicate substantial advances in the technical foundations of the field. They also show that it is time to evaluate the existing approaches to commonsense reasoning problems. The Second International Workshop on Nonmonotonic Reasoning was held from 12-16 June 1988 in Grassau, a small village near Lake Chiemsee in southern Germany. It was jointly organized by Johan de Kleer, Matthew Ginsberg, Erik Sandewall, and myself. Financial support for the workshop came from the American Association for Artificial Intelligence (AAAI), Deutsche Forschungsgemeinschaft (DFG), The European Communities (Project Cost-13), Linköping University, and SIEMENS AG.
The Seventh International Workshop on Nonmonotonic Reasoning was held in Trento, Italy, on 30 May to 1 June 1998 in conjunction with the Sixth International Conference on the Principles of Knowledge Representation and Reasoning (KR'98). The workshop was sponsored by the American Association for Artificial Intelligence, Compulog, Associazione Italiana per l'Intelligenza Artificiale, and the Prolog Development Center. This year's workshop, organized by Gerhard Brewka and Ilkka Niemela (local chair: Enrico Giunchiglia, honorary chair: Ray Reiter), was different from earlier workshops in this series in an important aspect: it consisted of several specialized tracks, held partially in parallel, embedded in a plenary program that comprised invited talks and a panel. The following five tracks were organized: (1) Formal Aspects and Applications of Nonmonotonic Reasoning (cochairs: Jim Delgrande, Mirek Truszczynski), (2) Computational Aspects of Nonmonotonic Reasoning (cochairs: Niemela, Torsten Schaub), (3) Logic Programming (cochairs: Jürgen Dix, Jorge Lobo), (4) Action and Causality (cochairs: Vladimir Lifschitz, Hector Geffner), and (5) Belief Revision (cochairs: Hans Rott, Mary-Anne Williams). Both the new format and the scheduling of the workshop in conjunction with the KR Conference proved to be highly fruitful.
Decision theory and nonmonotonic logics are formalisms that can be employed to represent and solve problems of planning under uncertainty. We analyze the usefulness of these two approaches by establishing a simple correspondence between the two formalisms. The analysis indicates that planning using nonmonotonic logic comprises two decision-theoretic concepts: probabilities (degrees of belief in planning hypotheses) and utilities (degrees of preference for planning outcomes). We present and discuss examples of the following lessons from this decision-theoretic view of nonmonotonic reasoning: (1) decision theory and nonmonotonic logics are intended to solve different components of the planning problem; (2) when considered in the context of planning under uncertainty, nonmonotonic logics do not retain the domain-independent characteristics of classical (monotonic) logic; and (3) because certain nonmonotonic programming paradigms (for example, frame-based inheritance, nonmonotonic logics) are inherently problem specific, they might be inappropriate for use in solving certain types of planning problems. We discuss how these conclusions affect several current AI research issues.
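The correspondence described above can be illustrated with a toy planning choice: a default like "normally, the road is clear" plays the role of a high degree of belief, and the preferred plan maximizes expected utility. The probabilities and utilities below are invented for illustration, not taken from the article.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# "Normally the road is clear" read decision-theoretically: belief 0.9
# that the road is clear, with utilities for each outcome (assumed numbers).
plans = {
    "take_road":  [(0.9, 10), (0.1, -20)],   # clear vs. blocked
    "take_train": [(1.0, 5)],                # certain but slower
}
best = max(plans, key=lambda a: expected_utility(plans[a]))
assert best == "take_road"                   # 0.9*10 - 0.1*20 = 7 > 5
```

A nonmonotonic planner that simply concluded "the road is clear" from the default would pick the same plan here, but unlike the decision-theoretic version it keeps no record of how the choice depends on the (domain-specific) degrees of belief and preference.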