This new conference series promotes multidisciplinary research on tools and methodologies for efficiently capturing knowledge from a variety of sources and creating representations that can be (or eventually can be) useful for reasoning. The conference attracted researchers from diverse areas of AI, including knowledge representation, knowledge acquisition, intelligent user interfaces, problem solving and reasoning, planning, agents, text extraction, and machine learning. Knowledge acquisition has been a challenging area of research in AI, with its roots in early work to develop expert systems. Driven by the modern internet culture and knowledge-based industries, the study of knowledge capture has a renewed importance. Although there has been considerable work over the years in the area, activities have been distributed across several distinct research communities.
This article is a slightly modified version of an invited address that was given at the Eighth IEEE Conference on Artificial Intelligence for Applications in Monterey, California, on 2 March 1992. It describes the lessons learned in developing and implementing the Artificial Intelligence Research and Development Program at the National Aeronautics and Space Administration (NASA). In so doing, the article provides a historical perspective of the program in terms of the stages it went through as it matured. These stages are similar to the "ages of artificial intelligence" that Pat Winston described a year before the NASA program was initiated. The final section of the article attempts to generalize some of the lessons learned during the first seven years of the NASA AI program into AI program management heuristics.
Artificial Intelligence for Microcomputers. If you would like to develop an expert system or knowledge-based system on a microcomputer, you might want to read Artificial Intelligence for Microcomputers by Mickey Williamson. This nontechnical book is easy to understand, written for the unsophisticated microcomputer user. The first chapters provide a brief history of artificial intelligence (AI) and an introduction to natural language query systems. They explain what knowledge-based systems and expert systems are and how they work. Discussions are also provided of the two major AI programming languages, Lisp and Prolog, including their strengths and weaknesses. The remainder of the book is devoted to a review of some of the existing AI software products for microcomputers, such as natural language query systems, decision support systems, expert system development shells, and AI programming languages.
Editor: On "Learning Language" I was dismayed by the inclusion of William Katke's article ("Learning Language Using a Pattern Recognition Approach," Spring 1985). Usually you do an excellent job of representing "the current state of the art in Artificial Intelligence" (to quote your Editorial Policy), but I consider this article an exception. First, although the article claims to be on "learning language," what it presents is at best a knowledge-free approach to learning syntax. I saw no evidence that the induced syntax is useful for anything, and there are good reasons to believe that it is not, such as the unmnemonic category names and the intrinsic limitations of finite-state grammars. Second, this kind of work has been done before, and it did not work well then either; for a useful overview of the field and pointers into the literature, see the article on "Grammatical Inference" in Volume 3 of The Handbook of Artificial Intelligence. The ideas and issues presented were firmly focused on a conventional view of the design process, a view I can caricature as the SPIV methodology. Rather than complete specifications and the verification of proposed implementations, we should concentrate more on incremental development of specifications as a result of assessment of performance.
The Electrical Systems Division at the NASA Kennedy Space Center has developed and deployed an agent-based tool to monitor the space shuttle's ground processing telemetry stream. The agent provides autonomous monitoring of the telemetry stream and automatically alerts system engineers when predefined criteria have been met. Efficiency and safety are improved through increased automation.
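The monitoring pattern the abstract describes, in which each incoming telemetry sample is checked against predefined criteria and an alert is raised when one is met, can be sketched roughly as follows. All criterion names, fields, and thresholds here are hypothetical illustrations, not details of the actual NASA system:

```python
# Minimal sketch of criteria-based telemetry monitoring.
# Field names and thresholds are invented for illustration only.
from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List

@dataclass
class Criterion:
    name: str
    is_met: Callable[[Dict], bool]  # predicate over one telemetry sample

def monitor(samples: Iterable[Dict], criteria: List[Criterion]):
    """Yield an alert message for every predefined criterion met by a sample."""
    for sample in samples:
        for c in criteria:
            if c.is_met(sample):
                yield f"ALERT [{c.name}]: {sample}"

# Hypothetical criteria a system engineer might register.
criteria = [
    Criterion("tank_pressure_high", lambda s: s.get("pressure_psi", 0) > 100),
    Criterion("valve_open_during_fill",
              lambda s: s.get("valve") == "open" and s.get("mode") == "fill"),
]

# A toy stand-in for the ground processing telemetry stream.
stream = [
    {"pressure_psi": 95, "valve": "closed", "mode": "idle"},
    {"pressure_psi": 120, "valve": "closed", "mode": "fill"},
]

alerts = list(monitor(stream, criteria))
```

In this sketch, only the second sample trips the pressure criterion, so exactly one alert is produced; the engineer-facing notification channel is left abstract.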
Organizations are adaptive systems that continually attempt to push the limits of their own effectiveness to approach perfection. This is true of the "mom and pop" store that is threatened by the growth of shopping malls. It is true of the gigantic corporation that is threatened by public regulation and private competition. It is particularly true of organizations that are confronted with complex tasks, the vagaries of uncertainty, and the high and visible costs of irreversible error. The cause of organizational ineffectiveness or, indeed, failure is often perceived to be human frailty (Perrow 1984).
The annual Workshop on the Validation and Verification of Knowledge-Based Systems is the leading forum for presenting research on the validation and verification of knowledge-based systems (KBSs). The 1994 workshop was significant in that there was a definitive move in the philosophical position of the workshop from a testing- and tool-based approach to KBS evaluation to a formal specification-based approach. The workshop included 12 full papers and 5 short papers and was attended by 35 researchers from government, industry, and academia. The workshop has influenced the evolution of the discipline since its origins in 1988, when researchers were asking the questions, How can we evaluate the correctness of a KBS? How is this process different from conventional system evaluation?
We are victims of one common superstition: the superstition that we understand the changes that are daily taking place in the world because we read about them and know what they are. The anthropological stories and the concept of memes were brought to my attention several years ago by Lynn Conway. Much of the vision and some of the material was drawn from a paper that we worked on together but never published. The important distinction between process and product was made crisp for me by John Seely Brown, who also has encouraged and made possible projects like Trillium, which I watched with interest, and like Colab, in which I participated. Joshua Lederberg kindled my interest in biological issues and a respect for knowledge processes and their partial automation that has not faded. Dan Bobrow listened to my ramblings on several runs, agonized over my confusions, helped to get the kinks out of the arguments, and suggested the title for the article. Sanjay Mittal and I have spent many hours speculating together on the issues in building community knowledge bases and knowledge servers and in understanding the principles of knowledge competitions. Austin Henderson helped me to understand the Trillium story and to report it accurately.
Austin and Sanjay hounded me to say, more precisely, what a knowledge medium is. Agustin Araya and Mark Miller participated in a Colab session in which we tried to jointly lay out these ideas, and together asked me to make the prescriptions clearer. Ed Feigenbaum persuaded me to be more precise in the discussion of the limits of today's expert systems technology. Thanks to Agustin Araya, Dan Bobrow, John Seely Brown, Lynn Conway, Bob Engelmore, Ed Feigenbaum, Felix Frayman, Gregg Foster, Austin Henderson, Ken Kahn, Mark Miller, Sanjay Mittal, Julian Orr, Allen Sears, Lucy Suchman, and Paul Wallich for reading early drafts of this paper and for helping to clarify the ideas and improve the article's readability. Stephen Cross triggered the writing of this article when he invited me to give the keynote address at the Aerospace Applications of Artificial Intelligence Conference in Dayton, Ohio, in September 1985.
Hence, at a coarse-grained level of abstraction, KBSs can be characterized in terms of two components: (1) a knowledge base, encoding the knowledge embodied by the system, and (2) a reasoning engine, which is able to query the knowledge base, infer or acquire knowledge from external sources, and add new knowledge to the knowledge base. A knowledge-level account of a KBS (that is, a competence-centered, implementation-independent description of a system), such as Clancey's (1985) analysis of first-generation rule-based systems, focuses on the task-centered competence of the system; that is, it addresses issues such as what kinds of problems the KBS is designed to tackle, what reasoning methods it uses, and what knowledge it requires. In contrast with task-centered analyses, Levesque and Lakemeyer focus on the competence of the knowledge base rather than that of the whole system. Hence, their notion of competence is a task-independent one: It is the "abstract state of knowledge" (p. …). This is an interesting assumption, which the "proceduralists" in the AI community might object to: According to the procedural viewpoint of knowledge representation, the knowledge modeled in an application, its representation, and the associated knowledge-retrieval mechanisms have to be engineered as …. As a result, they would argue, it is not possible to discuss the knowledge of a system independently of the task context in which the system is meant to operate.
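The two-component view above (a knowledge base plus a reasoning engine that queries it and adds inferred knowledge back) can be illustrated with a minimal forward-chaining sketch. The class names, rule format, and toy facts are invented for illustration and are not drawn from any system cited here:

```python
# Minimal two-component KBS sketch: a knowledge base of facts and rules,
# and a reasoning engine that queries it and adds newly inferred facts.
class KnowledgeBase:
    def __init__(self, facts, rules):
        self.facts = set(facts)      # atomic facts, e.g. "bird(tweety)"
        self.rules = list(rules)     # (antecedents, consequent) pairs

    def tell(self, fact):
        """Add new knowledge to the knowledge base."""
        self.facts.add(fact)

    def ask(self, fact):
        """Query the knowledge base."""
        return fact in self.facts

class ReasoningEngine:
    """Forward chaining: repeatedly fire rules whose antecedents all hold."""
    def __init__(self, kb):
        self.kb = kb

    def run(self):
        changed = True
        while changed:
            changed = False
            for antecedents, consequent in self.kb.rules:
                if all(self.kb.ask(a) for a in antecedents) and not self.kb.ask(consequent):
                    self.kb.tell(consequent)   # inferred knowledge goes back into the KB
                    changed = True

# Toy example: two chained rules over one starting fact.
kb = KnowledgeBase(
    facts={"bird(tweety)"},
    rules=[({"bird(tweety)"}, "has_wings(tweety)"),
           ({"has_wings(tweety)"}, "can_fly(tweety)")],
)
ReasoningEngine(kb).run()
```

After `run()`, `kb.ask("can_fly(tweety)")` holds even though that fact was never asserted directly, which is the point of the separation: the knowledge base states what is known, and the engine decides what follows from it.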