The Fourth International Workshop on Nonmonotonic Reasoning brought together active researchers in nonmonotonic reasoning to discuss current research, results, and problems of both a theoretical and a practical nature. There was lively discussion on a number of issues, including future research directions for the field.
The contributions to this workshop indicate substantial advances in the technical foundations of the field. They also show that it is time to evaluate the existing approaches to commonsense reasoning problems. The Second International Workshop on Nonmonotonic Reasoning was held from 12 to 16 June 1988 in Grassau, a small village near Lake Chiemsee in southern Germany. It was jointly organized by Johan de Kleer, Matthew Ginsberg, Erik Sandewall, and myself. Financial support for the workshop came from the American Association for Artificial Intelligence (AAAI), the Deutsche Forschungsgemeinschaft (DFG), the European Communities (Project Cost-13), Linköping University, and SIEMENS AG.
However, Perlis has shown that one of these formalisms, circumscription, is subject to certain counterintuitive limitations. Kraus and Perlis suggested a partial solution, but significant problems remain. In this paper, we observe that the unfortunate limitations of circumscription are even broader than Perlis originally pointed out. Moreover, these problems are not confined to circumscription; they appear to be endemic in current nonmonotonic reasoning formalisms. We develop a much more general solution than that of Kraus and Perlis, based on restricting the scope of nonmonotonic reasoning, and show that it remedies these problems in a variety of formalisms.
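For readers unfamiliar with the formalism under discussion, circumscription in McCarthy's standard formulation can be sketched as follows; the notation below is the textbook presentation, not a formula drawn from the paper itself. Circumscribing a predicate P in a theory A(P) selects the models in which the extension of P is minimal:

```latex
% Standard (McCarthy-style) circumscription of P in A(P):
\[
\mathrm{Circ}[A; P] \;\equiv\; A(P) \,\wedge\, \neg\exists p\,\bigl(A(p) \wedge p < P\bigr),
\]
% where "p < P" abbreviates "p is a strictly smaller predicate than P":
\[
p < P \;\equiv\; \forall x\,\bigl(p(x) \rightarrow P(x)\bigr) \,\wedge\, \neg\forall x\,\bigl(P(x) \rightarrow p(x)\bigr).
\]
```

Informally, the second-order conjunct rules out any model of A in which P could be replaced by a strictly smaller predicate, which is what licenses the nonmonotonic "assume P holds as rarely as possible" inferences whose limitations the paper examines.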
The challenge we address is to create autonomous, inductively learning agents that exploit and modify a knowledge base. Our general approach, embodied in a continuing research program (joint with Stuart Russell), is declarative bias: using declarative knowledge to constrain the hypothesis space in inductive learning. In previous work, we have shown that many kinds of declarative bias can be relatively efficiently represented and derived from background knowledge. We begin by observing that the default, i.e., revisable, flavor of beliefs is crucial in applications, especially for competence to improve incrementally and for information to be acquired through communication, language, and sensory perception in integrated agents. We argue that much of learning in humans consists of "learning in the small" and is nothing more nor less than acquiring new plausible premise beliefs. Thus the representation of defaults and plausible knowledge should be a central question for researchers aiming to design sophisticated learning agents that exploit a knowledge base. We show that such applications pose several representational requirements that are unfamiliar to most in the machine learning community, and whose combination has not been previously addressed by the knowledge representation community. These include: prioritization-type precedence between defaults; updating with new defaults, not just new for-sure beliefs; explicit reasoning about adoption of defaults and precedence between defaults; and integration of defaults with probabilistic and statistical beliefs. We show how, for the first time, to achieve all of these requirements, at least partially, in one declarative formalism: Defeasible Axiomatized Policy Circumscription, a generalized variant of circumscription.
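To illustrate what "prioritization-type precedence between defaults" amounts to in a circumscriptive setting, the following is a standard textbook-style example using abnormality predicates; it is an illustration of the general technique, not an excerpt from the formalism proposed in the paper:

```latex
% Two defaults encoded with abnormality predicates:
%   "Birds normally fly" and "Penguins are abnormal birds (they do not fly)".
\[
\forall x\,\bigl(\mathit{Bird}(x) \wedge \neg ab_1(x) \rightarrow \mathit{Flies}(x)\bigr), \qquad
\forall x\,\bigl(\mathit{Penguin}(x) \rightarrow ab_1(x)\bigr).
\]
% Prioritized circumscription minimizes the abnormality predicates in order,
% so ab_1 is minimized at strictly higher priority than ab_2:
\[
\mathrm{Circ}[A;\; ab_1 > ab_2;\; \mathit{Flies}]
\]
```

Minimizing the abnormality predicates yields the intended conclusions: a bird not known to be a penguin is inferred to fly, while a penguin, being forced into ab_1, is not. Precedence between defaults corresponds to the priority ordering on the minimized predicates, which is one of the representational requirements the abstract lists.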