DeJong, G.

Explanation-based learning: An alternative view


In the last issue of this journal, Mitchell, Keller, and Kedar-Cabelli presented a unifying framework for the explanation-based approach to machine learning. While this framework works well for a number of systems, it does not adequately capture certain aspects of the systems under development by the explanation-based learning group at Illinois. The primary inadequacies arise in its treatment of concept operationality, the organization of knowledge into schemata, and learning from observation. This paper outlines six specific problems with the previously proposed framework and presents an alternative generalization method for performing explanation-based learning of new concepts.

Purposive Understanding


Conceptual Dependency began to rely more on underlying primitives to represent the similarities in meaning that transcend the particular words of a language (Schank, 1975). We built an inference program (Schank and Rieger, 1974) that exploited the properties of the primitive concepts uncovered by the parser and derived new information from them. The end product of such an inference procedure was a connected causal chain of events representing both the implicit and the explicit information in a text (Schank, 1974). At this point we began to program a complete computer understanding system that would attempt to process input texts.