There has been a good deal of discussion recently about the possibility of standardizing knowledge representation efforts, including the development of an interlingua, or knowledge interchange format (KIF), that would allow developers of declarative knowledge to share their results with other AI researchers. In this article, I examine the practicality of this idea. I present some philosophical arguments against it, describe a straw-man KIF, and suggest specific experiments that would help explore these issues.
Building new knowledge-based systems today usually entails constructing new knowledge bases from scratch. It could instead be done by assembling reusable components. System developers would then only need to worry about creating the specialized knowledge and reasoners new to the specific task of their system. This new system would interoperate with existing systems, using them to perform some of its reasoning. In this way, declarative knowledge, problem-solving techniques, and reasoning services could all be shared among systems. This approach would facilitate building bigger and better systems cheaply. The infrastructure to support such sharing and reuse would lead to greater ubiquity of these systems, potentially transforming the knowledge industry. This article presents a vision of the future in which knowledge-based system development and operation is facilitated by infrastructure and technology for knowledge sharing. It describes an initiative currently under way to develop these ideas and suggests steps that must be taken in the future to try to realize this vision.
Solving a design problem efficiently requires an adequate representation. AI researchers have identified knowledge types that structure information according to different needs. One of the major tasks of any design-system developer is to identify all the pieces of knowledge the system uses and to map them onto the most appropriate representations AI provides. A list of the knowledge and representations a system uses can serve to characterize it.
This is the first in a series of articles exploring knowledge representation in Artificial Intelligence from the perspective of a practical implementer and programmer. AI is now a collection of approaches that has seen practical commercial application in search, linguistics, reasoning, and analytics across a wide variety of industries. How we represent knowledge in a computer affects how our applications perform, which algorithms we choose, and indeed whether the applications can succeed at all. Graphs have long been a popular mechanism in AI for encoding knowledge about the world. In their simplest form, they consist of just nodes and arcs, each of which can carry a label.
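To make the node-and-arc picture concrete, here is a minimal sketch of a labeled graph in Python. The class name, the example facts, and the triple-based arc storage are illustrative assumptions, not taken from any particular AI system described in these articles; they simply show how labeled nodes and labeled arcs can encode a small piece of world knowledge.

```python
class LabeledGraph:
    """A minimal labeled graph: nodes carry labels, arcs are labeled triples."""

    def __init__(self):
        self.nodes = {}   # node id -> node label
        self.arcs = []    # list of (source, arc_label, target) triples

    def add_node(self, node_id, label):
        self.nodes[node_id] = label

    def add_arc(self, source, label, target):
        self.arcs.append((source, label, target))

    def neighbors(self, node_id, label=None):
        """Targets reachable from node_id, optionally filtered by arc label."""
        return [t for (s, l, t) in self.arcs
                if s == node_id and (label is None or l == label)]


# Encode a tiny fragment of world knowledge (hypothetical example facts).
g = LabeledGraph()
g.add_node("clyde", "Clyde")
g.add_node("elephant", "Elephant")
g.add_node("gray", "Gray")
g.add_arc("clyde", "instance-of", "elephant")
g.add_arc("elephant", "color", "gray")

print(g.neighbors("clyde", "instance-of"))  # ['elephant']
```

Storing arcs as (source, label, target) triples mirrors how many knowledge-graph formalisms represent assertions, and it makes queries like "what is Clyde an instance of?" a simple filter over the arc list.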
The Workshop on Term Subsumption Languages in Knowledge Representation was held 18-20 October 1989 at the Inn at Thorn Hill, located in the White Mountain region of New Hampshire. The workshop was organized by Peter F. Patel-Schneider of AT&T Bell Laboratories, Murray Hill, New Jersey; Marc Vilain of MITRE, Bedford, Massachusetts; Ramesh S. Patil of the Massachusetts Institute of Technology (MIT); and Bill Mark of the Lockheed AI Center, Menlo Park, California. Support was provided by the American Association for Artificial Intelligence and AT&T Bell Laboratories. This workshop was the latest in a series in this area. Previous workshops have had a slightly narrower focus, being explicitly concerned with KL-One, the first knowledge representation system based on a term subsumption language (TSL), or its successor, NIKL.