Doyle, J.


Two theses of knowledge representation: Language restrictions, taxonomic classification, and the utility of representation services


Levesque and Brachman argue that in order to provide timely and correct responses in the most critical applications, general-purpose knowledge representation systems should restrict their languages by omitting constructs which require nonpolynomial worst-case response times for sound and complete classification. They also separate terminological and assertional knowledge, and restrict classification to purely terminological information. We argue that logical soundness, completeness, and worst-case complexity are inadequate measures for evaluating the utility of representation services, and that this evaluation should employ the broader notions of utility and rationality found in decision theory. We suggest that general-purpose representation services should provide fully expressive languages, classification over relevant contingent information, "approximate" forms of classification involving defaults, and rational management of inference tools.
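
To make the decision-theoretic point concrete, here is a minimal sketch, not from the paper: all utilities, probabilities, and names such as expected_utility are invented for illustration. It ranks two hypothetical services by expected utility rather than by worst-case complexity.

    def expected_utility(p_correct, u_correct, u_wrong, delay_cost):
        """Expected utility of one answered query, net of the cost of waiting."""
        return p_correct * u_correct + (1 - p_correct) * u_wrong - delay_cost

    # Sound and complete classifier: always right, but slow enough that the
    # answer often arrives too late to act on (high delay cost).
    complete = expected_utility(p_correct=1.0, u_correct=10, u_wrong=-50, delay_cost=8)

    # Approximate classifier using defaults: occasionally wrong, but fast.
    approximate = expected_utility(p_correct=0.95, u_correct=10, u_wrong=-50, delay_cost=1)

    print(complete, approximate)  # 2.0 6.0 under these invented numbers

Under these assumed numbers the fast approximate service outranks the sound and complete one, the kind of trade-off the authors argue worst-case complexity measures cannot capture.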


Non-monotonic logic I


'Non-monotonic' logical systems are logics in which the introduction of new axioms can invalidate old theorems. Such logics are very important in modeling the beliefs of active processes which, acting in the presence of incomplete information, must make and subsequently revise assumptions in light of new observations. We present the motivation and history of such logics. We develop model and proof theories, a proof procedure, and applications for one non-monotonic logic. In particular, we prove the completeness of the non-monotonic predicate calculus and the decidability of the non-monotonic sentential calculus. We also discuss characteristic properties of this logic and its relationship to stronger logics, logics of incomplete information, and truth maintenance systems.

Artificial Intelligence 13:41-72
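
The defining property is easy to exhibit. The toy sketch below is an illustration only, not the paper's model or proof theory; the predicate names and the crude consistency test are invented. It shows a default conclusion being withdrawn when a new axiom arrives.

    def flies(axioms):
        # Default rule: from bird(tweety), conclude flies(tweety) unless the
        # negation is known -- a crude stand-in for the consistency operator M.
        return "bird(tweety)" in axioms and "not flies(tweety)" not in axioms

    axioms = {"bird(tweety)"}
    print(flies(axioms))             # True: flies(tweety) holds by default

    axioms.add("not flies(tweety)")  # a new axiom arrives
    print(flies(axioms))             # False: the old theorem is invalidated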


A truth maintenance system


To choose their actions, reasoning programs must be able to make assumptions and subsequently revise their beliefs when discoveries contradict these assumptions. The Truth Maintenance System (TMS) is a problem solver subsystem for performing these functions by recording and maintaining the reasons for program beliefs. Such recorded reasons are useful in constructing explanations of program actions and in guiding the course of action of a problem solver. This paper describes (1) the representations and structure of the TMS, (2) the mechanisms used to revise the current set of beliefs, (3) how dependency-directed backtracking changes the current set of assumptions, (4) techniques for summarizing explanations of beliefs, (5) how to organize problem solvers into "dialectically arguing" modules, (6) how to revise models of the belief systems of others, and (7) methods for embedding control structures in patterns of assumptions. We stress the need for problem solvers to choose between alternative systems of beliefs, and outline a mechanism by which a problem solver can employ rules guiding choices of what to believe, what to want, and what to do.

Artificial Intelligence 12(3):231-272
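
As a rough illustration of the core idea, the sketch below records reasons rather than bare facts, so that retracting a premise automatically revises every belief that depended on it. This is a heavily simplified justification-based sketch assuming monotonic justifications only; Doyle's TMS additionally supports non-monotonic justifications, dependency-directed backtracking, and explanation, none of which appear here.

    class TMS:
        def __init__(self):
            self.justifications = {}   # node -> list of antecedent sets

        def justify(self, node, antecedents=()):
            """Record a reason for believing node; no antecedents = premise."""
            self.justifications.setdefault(node, []).append(frozenset(antecedents))

        def is_in(self, node, _seen=None):
            """A node is IN when some justification has all antecedents IN."""
            seen = _seen or set()
            if node in seen:           # guard against circular support
                return False
            return any(all(self.is_in(a, seen | {node}) for a in js)
                       for js in self.justifications.get(node, []))

    tms = TMS()
    tms.justify("rained")                    # premise
    tms.justify("ground-wet", ["rained"])    # the reason is recorded, not just the fact
    print(tms.is_in("ground-wet"))           # True

    tms.justifications["rained"].clear()     # retract the premise...
    print(tms.is_in("ground-wet"))           # False: dependent belief revised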