This paper outlines a methodology for analyzing the representational support for knowledge-based decision-modeling in a broad domain. A relevant set of inference patterns and knowledge types is identified. By comparing the analysis results to existing representations, some insights are gained into a design approach for integrating categorical and uncertain knowledge in a context-sensitive manner.
The probabilistic conceptual network is a knowledge representation scheme designed for reasoning about concepts and categorical abstractions in utility-based categorization. The scheme combines the formalisms of abstraction and inheritance hierarchies from artificial intelligence with probabilistic networks from decision analysis. It provides a common framework for representing conceptual knowledge, hierarchical knowledge, and uncertainty, and it facilitates the dynamic construction of categorization decision models at varying levels of abstraction. The scheme is applied to an automated machining problem, where it supports reasoning about the state of the machine at varying levels of abstraction in support of actions for maintaining the plant's competitiveness.
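The idea of utility-based categorization at varying abstraction levels can be illustrated with a hedged sketch (this is not the paper's scheme; all states, categories, probabilities, and utilities below are invented for illustration): beliefs over specific machine states are aggregated up an abstraction hierarchy, and the expected utility of each action is then computed over the abstract categories.

```python
# Hedged illustration of utility-based categorization over an abstraction
# hierarchy. All names and numbers are hypothetical, not from the paper.

# Belief over specific machine states (illustrative probabilities).
belief = {"worn_tool": 0.3, "broken_tool": 0.1, "normal": 0.6}

# Abstraction hierarchy: specific states grouped into abstract categories.
abstraction = {"tool_fault": ["worn_tool", "broken_tool"], "ok": ["normal"]}

def abstract_belief(belief, abstraction):
    """Aggregate specific-state probabilities into abstract categories."""
    return {cat: sum(belief[s] for s in states)
            for cat, states in abstraction.items()}

# Utility of each action per abstract category (illustrative values).
utility = {
    "replace_tool": {"tool_fault": 10.0, "ok": -5.0},
    "continue":     {"tool_fault": -20.0, "ok": 2.0},
}

def best_action(belief_abs, utility):
    """Pick the action with the highest expected utility."""
    eu = {a: sum(belief_abs[c] * u for c, u in table.items())
          for a, table in utility.items()}
    return max(eu, key=eu.get), eu

b = abstract_belief(belief, abstraction)  # {'tool_fault': 0.4, 'ok': 0.6}
action, eu = best_action(b, utility)      # 'replace_tool' maximizes EU here
```

The point of the abstraction step is that a decision can often be made on the coarse categories alone, without resolving which specific fault holds.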
This supposedly makes representing exceptions (three-legged elephants and the like) easy; but, alas, it makes one crucial type of representation impossible: that of composite descriptions whose meanings are functions of the structure and interrelation of their parts. This article explores this and other ramifications of the emphasis on default properties and "typical" objects. While I believe this to be an important point, this article was never meant to be the definitive work on logical distinctions in knowledge representation. Some of the notions mentioned here in passing (e.g., analyticity) are perennially problematic. In addition, I have not really attempted to bring the body of the article up to date from its original form. The article is also generally nonconstructive. However, there is now ample evidence that this kind of analysis can lead to constructive suggestions for knowledge representation systems. In work pursued after the original version of this article was written, some ...
The purpose of this paper is twofold: (i) we argue that the structure of commonsense knowledge must be discovered, rather than invented; and (ii) we argue that natural language, which is the best known theory of our (shared) commonsense knowledge, should itself be used as a guide to discovering the structure of commonsense knowledge. In addition to suggesting a systematic method for discovering the structure of commonsense knowledge, the method we propose also seems to provide an explanation for a number of phenomena in natural language, such as metaphor, intensionality, and the semantics of nominal compounds. Admittedly, our ultimate goal is quite ambitious: nothing less than the systematic 'discovery' of a well-typed ontology of commonsense knowledge, and the subsequent formulation of a long-awaited 'meaning algebra'.
Typing schemes which allow inheritance from supertypes to subtypes are a common way of representing information about the world. There are various systems and theories which use such representations plus some inference rules to deduce properties of objects about which the system has only partial information. Many such systems have problems related to multiple inheritance, and have some difficulty in drawing conclusions which we as humans see as intuitively simple. We present a model of typing based on a lattice of feature descriptors. A type is represented by two important points in the lattice, representing core and default information.
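The core/default distinction can be sketched in a few lines (a minimal illustration, not the paper's lattice formalism; the types and features below are invented): each type carries core features that must hold and default features that hold unless overridden, with local core information taking precedence over inherited defaults.

```python
# Hedged sketch of core/default inheritance. Feature descriptors are
# (attribute, value) pairs; names and the merge policy are illustrative.

class Type:
    def __init__(self, name, core=None, default=None, parents=()):
        self.name = name
        self.core = dict(core or {})        # necessarily true features
        self.default = dict(default or {})  # typically true features
        self.parents = parents

    def features(self):
        """Merge inherited defaults with local information.

        Local information overrides inherited information, and core
        features override defaults."""
        merged = {}
        for p in self.parents:      # inherited features (least specific first)
            merged.update(p.features())
        merged.update(self.default)  # local defaults override inherited ones
        merged.update(self.core)     # core information overrides everything
        return merged

# A classic exception case: elephants typically have four legs, but the
# core fact about this particular elephant overrides the default.
elephant = Type("elephant", core={"kind": "mammal"},
                default={"legs": 4, "color": "gray"})
clyde = Type("clyde", core={"legs": 3}, parents=(elephant,))

assert clyde.features()["legs"] == 3      # core beats inherited default
assert clyde.features()["color"] == "gray"  # unchallenged default survives
```

In the paper's terms, the core and default points of a type would instead live in a lattice of feature descriptors, with merging done by lattice operations rather than dictionary updates; the sketch only conveys the precedence of core over default information.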