Role-limiting approaches support knowledge acquisition (KA) by centering knowledge base construction on common types of tasks or domain-independent problem-solving strategies. Within a particular problem-solving strategy, domain-dependent knowledge plays specific roles. A KA tool then helps a user to fill these roles. Although role-limiting approaches are useful for guiding KA, they are limited because they only support users in filling knowledge roles that have been built in by the designers of the KA system. EXPECT takes a different approach to KA by representing problem-solving knowledge explicitly, and deriving from the current knowledge base the knowledge gaps that must be resolved by the user during KA. This paper contrasts role-limiting approaches and EXPECT's approach, using the propose-and-revise strategy as an example. EXPECT not only supports users in filling knowledge roles, but also provides support in making other modifications to the knowledge base, including adapting the problem-solving strategy. EXPECT's guidance changes as the knowledge base changes, providing a more flexible approach to knowledge acquisition. This work provides evidence supporting the need for explicit representations in building knowledge-based systems.
Successfully attacking the large-scale, real-world domains targeted by the ARPA/Rome Labs Planning Initiative requires collaboration: people and machines must work together to solve problems, each contributing what they do best. In addition to planning systems, other computerized tools are needed to support that collaboration, such as tools for evaluating and critiquing plans. In this paper we describe the EXPECT knowledge acquisition framework, which we have used to construct a plan evaluation tool, the COA Evaluator. This application evaluates alternative military transportation plans for moving personnel and materiel from bases to crisis situations.
The knowledge about a task is not a collection of isolated information packets but rather a carefully constructed web of facts, data, and procedures. The details of how knowledge is organized and how it interacts are often unknown to the user and hard to keep track of. The key to knowledge acquisition is thus not in supporting the addition of more items to the collection, but in ensuring harmonious interactions between new and existing knowledge and in preventing redundancies, inconsistencies, and knowledge gaps that may arise inadvertently. Most tools for knowledge acquisition achieve this by having expectations about how each piece of knowledge fits into the overall system (Marcus & McDermott 1989; Musen 1989; Kahn, Nowlan, & McDermott 1985). For example, systems for classification tasks need knowledge for mapping inputs into classes.
Over the past decade, it has become clear that one should go beyond the level of formalisms and programming constructs to understand and analyze expert systems. I discuss the idea of inference structures such as heuristic classification (Clancey 1985), the distinction between deep and surface knowledge (Steels 1984), the notion of problem-solving methods and of domain knowledge filling roles required by the methods (McDermott 1988), and the idea of generic tasks and task-specific architectures (Chandrasekaran 1983). Such a synthesis is presented here in the form of a componential framework. The framework stresses modularity and consideration of the pragmatic constraints of the domain. A major question in knowledge engineering is (or should be): given a particular task, how do we go about solving it using expert-system techniques?
This article discusses frameworks for studying expertise at the knowledge level and knowledge-use level. It reviews existing approaches such as inference structures, the distinction between deep and surface knowledge, problem-solving methods, and generic tasks. A new synthesis is put forward in the form of a componential framework that stresses modularity and an analysis of the pragmatic constraints on the task. The analysis of a rule from an existing expert system (the Dipmeter Advisor) is used to illustrate the framework.