Hence, at a coarse-grained level of abstraction, KBSs can be characterized in terms of two components: (1) a knowledge base, encoding the knowledge embodied by the system, and (2) a reasoning engine, which is able to query the knowledge base, infer or acquire knowledge from external sources, and add new knowledge to the knowledge base. A knowledge-level account of a KBS (that is, a competence-centered, implementation-independent description of a system), such as Clancey's (1985) analysis of first-generation rule-based systems, focuses on the task-centered competence of the system; that is, it addresses issues such as what kind of problems the KBS is designed to tackle, what reasoning methods it uses, and what knowledge it requires. In contrast with task-centered analyses, Levesque and Lakemeyer focus on the competence of the knowledge base rather than that of the whole system. Hence, their notion of competence is a task-independent one: It is the "abstract state of knowledge" of the knowledge base. This is an interesting assumption, which the "proceduralists" in the AI community might object to: According to the procedural viewpoint of knowledge representation, the knowledge modeled in an application, its representation, and the associated knowledge-retrieval mechanisms have to be engineered as a whole. As a result, they would argue, it is not possible to discuss the knowledge of a system independently of the task context in which the system is meant to operate.
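The two-component architecture described above can be illustrated with a minimal sketch. All names here (`KnowledgeBase`, `ReasoningEngine`, `tell`, `ask`) are hypothetical, chosen only to mirror the description; the review does not specify any particular implementation.

```python
# Hypothetical sketch of the two-component KBS architecture:
# (1) a knowledge base holding the system's knowledge, and
# (2) a reasoning engine that queries it and adds inferred facts back.

class KnowledgeBase:
    """Component (1): stores the knowledge embodied by the system."""
    def __init__(self):
        self.facts = set()   # ground atoms, e.g. "bird(tweety)"
        self.rules = []      # (premises, conclusion) pairs

    def tell(self, fact):
        self.facts.add(fact)

class ReasoningEngine:
    """Component (2): queries the KB and infers new knowledge."""
    def __init__(self, kb):
        self.kb = kb

    def ask(self, query):
        return query in self.kb.facts

    def infer(self):
        # Naive forward chaining: fire rules until no new facts appear.
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.kb.rules:
                if premises <= self.kb.facts and conclusion not in self.kb.facts:
                    self.kb.tell(conclusion)
                    changed = True

kb = KnowledgeBase()
kb.tell("bird(tweety)")
kb.rules.append(({"bird(tweety)"}, "flies(tweety)"))

engine = ReasoningEngine(kb)
engine.infer()
print(engine.ask("flies(tweety)"))   # True
```

Note how the split supports the knowledge-level view: the `KnowledgeBase` can be described by what it knows, independently of how the engine happens to query it.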
The 1994 Workshop on Computational Dialectics was held during the 1994 National Conference on AI. At issue were the following ideas: (1) dialectic has something to do with computation, (2) AI has something new to contribute to the understanding of dialectic, and (3) dialectical approaches to longstanding AI problems permit new progress. This article outlines the significant research presented at the workshop. It is to the credit of one of the workshop's organizers that the idea has earned a further hearing in a German workshop that he is currently organizing with colleagues on the continent. For the workshop in Seattle, Washington, at AAAI-94, our committee included Johanna Moore (University of Pittsburgh) and Katia Sycara (Carnegie Mellon University).
It is intended to be a "general textbook of knowledge-base analysis and design." Its great strength is its recognition of the need for an interdisciplinary approach and its attempt to present the logical and philosophical foundations of knowledge representation under a unified view. Its great weakness is a lack of consistent rigor, which is needed in a textbook for newcomers to a subject. After some historical remarks and a first introductory chapter devoted to logic, Sowa immediately attacks the hard problems involved in choosing ontological categories, which lie at the heart of any knowledge representation project.
Conceptual Spaces: The Geometry of Thought is a book by Peter Gärdenfors, professor of cognitive science at Lund University, Sweden. Gärdenfors has authored another book in this series (based on work with Carlos Alchourron and David Makinson), Knowledge in Flux, a definitive account of the widely examined AGM (after Alchourron, Gärdenfors, and Makinson) theory of belief revision. The AGM theory is firmly based on classical logic and its model theory, and by his founding participation in developing it, Gärdenfors has earned the right to critique knowledge representation. His new book is not primarily about logic, but it is certainly not an apostasy either. If I may be permitted a minor irreverence, I would say that this book came not to destroy logic but to fulfill.
The idea is that although an AI system without the frame problem might, say, read an echocardiogram and diagnose a heart defect, a really smart autonomous robot will arrive only if, like us humans, it can handle the frame problem. The highlight … is an entertaining go-round between two pugilists trading blows in civil but gloves-off style, reminiscent of a net discussion. We're still confronted by a difficult question: Is there a solution to it? If not, then R2D2 might forever be but a creature of fiction. If, however, the frame problem is solvable, we must confront yet another question: Is there a general solution to the frame problem, or is the best that can be mustered a so-called domain-dependent solution?
The importance of contextual reasoning is emphasized by various researchers in AI. (A partial list includes John McCarthy and his group, R. V. Guha, Yoav Shoham, Giuseppe Attardi and Maria Simi, and Fausto Giunchiglia and his group.) Here, we survey the problem of formalizing context and explore what is needed for an acceptable account of this abstract notion. Although the word context is frequently used in descriptions, explanations, and analyses of computer programs in these areas, its meaning is frequently left to the reader's understanding; that is, it is used in an implicit and intuitive manner. An example of how contexts may help in AI is found in McCarthy's (constructive) criticism (McCarthy 1984). "I wish honorable gentlemen would have the fairness to give the entire context of what I did say, and not pick out detached words" (R. Cobden, quoted in the Oxford English Dictionary, p. 902). The main motivation for studying formal contexts is to resolve the problem of generality in AI.
AI Magazine Volume 22, Number 4 (2001) (AAAI). The origins of the BDI model lie in the theory of human practical reasoning developed by the philosopher Michael Bratman (1987) in the mid-1980s. Among the three strands of BDI research, Reasoning about Rational Agents focuses almost exclusively on the logical foundations of the theory. "If you've heard about A-life but aren't quite sure what it is or where it's going, Grand's book is an excellent place to enter one of the more exciting areas of twenty-first-century science." "Steve Grand showed us how to build a universe of evolving creatures, without the prevailing academic biases. This delightful book is a fresh and inspiring account of how to succeed in creating artificial life." Nevertheless, A-life continues to be an important, thriving research area that helps to crystallize some of the deepest questions in the field.
The Center for Automation and Intelligent Systems Research at Case Western Reserve University, founded in 1984, provides the setting and the administrative and funding mechanisms for coordinating and focusing the capabilities of faculty members and students from many disciplines and departments to deal with significant real-world problems encountered in the automation of production. The center serves as an interface between separate basic research efforts in the various disciplines and academic departments and the multidisciplinary group efforts needed to deal effectively with nontrivial real problems. The main focus of research at the center is on the effective integration of computer-based technologies that appear to be essential for the factory of the future. Thus, the scope of activities at the center is somewhat broader than AI, but AI plays a central role in this integration because effective integration of complex systems requires intelligence. The major emphasis at the center is on industrial applications.
This is the first presidential address of AAAI, the American Association for Artificial Intelligence. In the grand scheme of history, even the history of artificial intelligence (AI), this is surely a minor event. The field this scientific society represents has been thriving for quite some time. No doubt the society itself will make solid contributions to the health of our field. But it is too much to expect a presidential address to have a major impact. So what is the role of the presidential address, and what is the significance of the first one? I believe its role is to set a tone, to provide an emphasis.
We give an overview of the multifaceted relationship between nonmonotonic logics and preferences. We discuss how the nonmonotonicity of reasoning itself is closely tied to preferences reasoners have on models of the world or, as we often say here, possible belief sets. Selecting extended logic programming with answer-set semantics as a generic nonmonotonic logic, we show how that logic defines preferred belief sets and how preferred belief sets allow us to represent and interpret normative statements. Conflicts among program rules (more generally, defaults) give rise to alternative preferred belief sets. We discuss how such conflicts can be resolved based on implicit specificity or on explicit rankings of defaults.
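The core idea above, that a nonmonotonic logic selects preferred belief sets (answer sets) and that specificity can implicitly resolve conflicts among defaults, can be sketched for a tiny ground program. This is an illustrative toy, not the paper's formalism: the rules encode the classic "birds normally fly, but penguins do not" example, and stability is checked with the standard Gelfond-Lifschitz reduct by brute-force enumeration.

```python
from itertools import chain, combinations

# Ground rules as (head, positive_body, negative_body):
#   flies :- bird, not ab.     (birds normally fly -- a default)
#   ab    :- penguin.          (penguins are abnormal fliers: specificity)
#   bird  :- penguin.
#   penguin.
rules = [
    ("flies",   {"bird"},    {"ab"}),
    ("ab",      {"penguin"}, set()),
    ("bird",    {"penguin"}, set()),
    ("penguin", set(),       set()),
]
atoms = {"flies", "ab", "bird", "penguin"}

def closure(positive_rules):
    """Least model of a negation-free program, by forward chaining."""
    facts = set()
    changed = True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if pos <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def is_answer_set(candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body
    # intersects the candidate set, then strip the negative bodies.
    reduct = [(h, p) for h, p, n in rules if not (n & candidate)]
    return closure(reduct) == candidate

# Enumerate all subsets of atoms and keep the stable ones.
answer_sets = [set(s) for s in chain.from_iterable(
    combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    if is_answer_set(set(s))]
print([sorted(s) for s in answer_sets])   # [['ab', 'bird', 'penguin']]
```

The unique preferred belief set contains `ab`, `bird`, and `penguin` but not `flies`: the more specific penguin rule blocks the flying default, which is exactly the kind of implicit, specificity-based conflict resolution discussed above.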