Role-limiting approaches support knowledge acquisition (KA) by centering knowledge base construction on common types of tasks or domain-independent problem-solving strategies. Within a particular problem-solving strategy, domain-dependent knowledge plays specific roles. A KA tool then helps a user to fill these roles. Although role-limiting approaches are useful for guiding KA, they are limited because they only support users in filling knowledge roles that have been built in by the designers of the KA system. EXPECT takes a different approach to KA by representing problem-solving knowledge explicitly, and deriving from the current knowledge base the knowledge gaps that must be resolved by the user during KA. This paper contrasts role-limiting approaches and EXPECT's approach, using the propose-and-revise strategy as an example. EXPECT not only supports users in filling knowledge roles, but also provides support in making other modifications to the knowledge base, including adapting the problem-solving strategy. EXPECT's guidance changes as the knowledge base changes, providing a more flexible approach to knowledge acquisition. This work provides evidence supporting the need for explicit representations in building knowledge-based systems.
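To make the notion of knowledge roles concrete, the sketch below (not from the paper) shows a minimal propose-and-revise loop in Python. The method itself defines three roles; the parameter proposers, constraint, and fix supplied at the bottom are hypothetical domain knowledge of the kind a role-limiting KA tool would elicit from a user.

```python
# Minimal sketch of propose-and-revise. The method fixes the control
# structure; domain knowledge fills three roles: parameter proposers,
# constraints, and fixes for violated constraints.

def propose_and_revise(parameters, constraints, fixes):
    """parameters: dict of name -> proposer function over the design.
    constraints: list of (name, predicate) pairs over the design.
    fixes: dict of constraint name -> revision function."""
    design = {}
    for name, proposer in parameters.items():
        design[name] = proposer(design)            # "propose" role
    changed = True
    while changed:
        changed = False
        for cname, holds in constraints:
            if not holds(design):                  # "constraint" role
                design = fixes[cname](design)      # "fix" role
                changed = True
    return design

# Hypothetical elevator-style example: the cable must be strong
# enough to carry the car. All names and numbers are illustrative.
design = propose_and_revise(
    parameters={"car_weight": lambda d: 1000,
                "cable_strength": lambda d: 800},
    constraints=[("strength",
                  lambda d: d["cable_strength"] >= d["car_weight"])],
    fixes={"strength":
           lambda d: {**d, "cable_strength": d["car_weight"]}},
)
print(design["cable_strength"])  # 1000
```

A role-limiting tool would only prompt for these three kinds of entries; EXPECT's point is that a user may also need to change the loop itself, which such a tool cannot support.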
To successfully attack the large-scale, real-world domains targeted by the ARPA/Rome Labs Planning Initiative, collaboration is required. People and machines must work together to solve problems, each contributing what they do best. In addition to planning systems, other computerized tools are needed to support that collaboration--such as tools for evaluating and critiquing plans. In this paper we describe the EXPECT knowledge acquisition framework, which we have used to construct a plan evaluation tool, the COA Evaluator. This application evaluates alternative military transportation plans for moving personnel and materiel from bases to crisis situations.
The knowledge about a task is not a collection of isolated information packets but rather a carefully constructed web of facts, data, and procedures. The details of how knowledge is organized and how it interacts may be unknown to the user and often hard to keep track of. The key to knowledge acquisition is thus not in supporting the addition of more items to the collection, but in ensuring harmonious interactions between new and existing knowledge and preventing redundancies, inconsistencies, and knowledge gaps that may arise inadvertently. Most tools for knowledge acquisition achieve this by having expectations about how each piece of knowledge fits in the overall system (Marcus & McDermott 1989; Musen 1989; Kahn, Nowlan, & McDermott 1985). For example, systems for classification tasks need knowledge for mapping inputs into classes.
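The classification example above can be sketched as a toy gap check (this code is illustrative, not from any of the papers cited): a tool that expects the roles of heuristic classification to be filled can flag any role no knowledge item covers. The role names and rules below are hypothetical.

```python
# Toy illustration of a KA tool using expectations: the system
# expects each role of a classification method to be filled by
# some knowledge item; unfilled roles are reported as gaps.

def knowledge_gaps(expected_roles, knowledge_base):
    """Return the expected role names that no KB item fills."""
    filled = {item["role"] for item in knowledge_base}
    return sorted(expected_roles - filled)

# Hypothetical KB for a medical classification task.
kb = [{"role": "abstract-input", "rule": "temp > 38C -> fever"},
      {"role": "match-class",    "rule": "fever & cough -> flu"}]

gaps = knowledge_gaps({"abstract-input", "match-class", "refine-class"}, kb)
print(gaps)  # ['refine-class']
```

The interesting part, as the paragraph notes, is not adding the items but maintaining the expectations that make such gaps detectable at all.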
Successful approaches to developing knowledge acquisition tools use expectations of what the user has to add or may want to add, based on how new knowledge fits within a knowledge base that already exists. When a knowledge base is first created or undergoes significant extensions and changes, these tools cannot provide much support. This paper presents an approach to creating expectations when a new knowledge base is built, and describes a knowledge acquisition tool that we implemented using this approach that supports users in creating problem-solving knowledge. As the knowledge base grows, the knowledge acquisition tool derives more frequent and more reliable expectations that result from enforcing constraints in the knowledge representation system, looking for missing pieces of knowledge in the knowledge base, and working out incrementally the interdependencies among the different components of the knowledge base. Our preliminary evaluations show a thirty percent time savings during knowledge acquisition. Moreover, by providing tools to support the initial phases of knowledge base development, many mistakes are detected early on and even avoided altogether. We believe that our approach contributes to improving the quality of the knowledge acquisition process and of the resulting knowledge-based systems as well.
Over the past decade, it has become clear that one should go beyond the level of formalisms and programming constructs to understand and analyze expert systems. I discuss the idea of inference structures such as heuristic classification (Clancey 1985), the distinction between deep and surface knowledge (Steels 1984), the notion of problem-solving methods and domain knowledge filling roles required by the methods (McDermott 1988), and the idea of generic tasks and task-specific architectures (Chandrasekaran 1983). Such a synthesis is presented here in the form of a componential framework. The framework stresses modularity and consideration of the pragmatic constraints of the domain. A major question in knowledge engineering is (or should be): given a particular task, how do we go about solving it using expert system techniques?