This is the first presidential address of AAAI, the American Association for Artificial Intelligence. In the grand scheme of the history of artificial intelligence (AI), this is surely a minor event. The field this scientific society represents has been thriving for quite some time. No doubt the society itself will make solid contributions to the health of our field. But it is too much to expect a presidential address to have a major impact.
Two closely related aspects of artificial intelligence that have received comparatively little attention in the recent literature are research methodology and the analysis of computational techniques that span multiple application areas. We believe both issues to be increasingly significant as Artificial Intelligence matures into a science and spins off major application efforts. If AI is to become more effective in building upon prior results rather than continually reinventing the proverbial wheel, it is imperative to analyze the repertoire of AI methods with respect to past experience, utility in new domains, extensibility, and functional equivalence with other techniques. Similarly, awareness of research methodology issues can help plan future research by learning from past successes and failures. We view the study of research methodology as similar to the analysis of operational AI techniques, but at a meta-level; that is, research methodology analyzes the techniques and methods used by the researchers themselves, rather than their programs, to resolve issues of selecting interesting and tractable problems to investigate, and of deciding how to proceed with their investigations.
A major strength of frame-based knowledge representation languages is their ability to provide the knowledge base designer with a concise and intuitively appealing means of expression. The claim of intuitive appeal is based on the observation that the object-centered style of description provided by these languages often closely matches a designer's understanding of the domain being modeled, and therefore lessens the burden of reformulation involved in developing a formal description. To be effective as a knowledge base development tool, a language needs to be supported by an implementation that facilitates creating, browsing, debugging, and editing the descriptions in the knowledge base. We have focused on providing such support in a Smalltalk (Ingalls, 1978) implementation of the KL-ONE knowledge representation language (Brachman, 1978), called KloneTalk, that has been in use by several projects for over a year at Xerox PARC. In this note, we describe those features of KloneTalk's display-based interface that have made it an effective knowledge base development tool, including the use of constraints to automatically determine descriptions of newly created database items.
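The object-centered style described above can be sketched in a few lines. This is a minimal, hypothetical illustration of frame-based description with inherited slots; the names (`Frame`, `set_slot`, `instantiate`) are invented for this sketch and are not KloneTalk's or KL-ONE's actual API.

```python
# Hypothetical sketch of frame-based representation: slots are inherited
# through a parent chain, so newly created items automatically acquire
# descriptions from the frames they instantiate. Names are illustrative,
# not the real KloneTalk interface.

class Frame:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.slots = {}  # slot name -> filler

    def set_slot(self, slot, value):
        self.slots[slot] = value

    def get_slot(self, slot):
        # Object-centered lookup: search this frame, then its parents.
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get_slot(slot)
        raise KeyError(slot)

    def instantiate(self, name, **fillers):
        # A new item inherits the description of its parent frame, echoing
        # how constraints determine descriptions of newly created items.
        item = Frame(name, parent=self)
        for slot, value in fillers.items():
            item.set_slot(slot, value)
        return item

vehicle = Frame("Vehicle")
vehicle.set_slot("wheels", 4)
truck = vehicle.instantiate("Truck", payload="10t")
print(truck.get_slot("wheels"))   # inherited from Vehicle: 4
print(truck.get_slot("payload"))  # local filler: 10t
```

The design point is the lookup path: a browser or debugger built over such frames can display both local and inherited fillers, which is what makes the object-centered style convenient for editing a knowledge base.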
Our group's work in medical decision making has led us to formulate a framework for expert system design, in particular about how the domain knowledge may be decomposed into substructures. We propose that there exist different problem-solving types, i.e., uses of knowledge, and corresponding to each is a separate substructure specializing in that type of problem solving. Each substructure is in turn further decomposed into a hierarchy of specialists which differ from each other not in the type of problem solving, but in the conceptual content of their knowledge; e.g., one of them may specialize in "heart disease," while another may do so in "liver disease," though both of them are doing the same type of problem solving. Thus ultimately all the knowledge in the system is distributed among problem solvers which know how to use that knowledge. This is in contrast to the currently dominant expert system paradigm, which proposes a common knowledge base accessed by knowledge-free problem solvers of various kinds. In our framework there is no distinction between knowledge bases and problem solvers: each knowledge source is a problem solver.
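The contrast drawn above can be made concrete with a toy sketch. In this hypothetical illustration (all names invented, not from the source), each specialist couples its own fragment of knowledge with the procedure that uses it, and a parent specialist delegates to its subspecialists; there is no shared knowledge base accessed by a knowledge-free interpreter.

```python
# Toy sketch of a hierarchy of specialists: knowledge lives inside each
# problem solver rather than in a common knowledge base. Illustrative only.

class Specialist:
    def __init__(self, concept, subspecialists=None):
        self.concept = concept
        self.subspecialists = subspecialists or []

    def relevant(self, case):
        # Embedded knowledge: the specialist itself decides whether its
        # concept applies, directly or via one of its subspecialists.
        return (self.concept in case["findings"]
                or any(s.relevant(case) for s in self.subspecialists))

    def solve(self, case):
        # If this specialist applies, it concludes its concept and
        # delegates refinement to its subspecialists.
        if not self.relevant(case):
            return []
        conclusions = [self.concept]
        for sub in self.subspecialists:
            conclusions.extend(sub.solve(case))
        return conclusions

liver = Specialist("liver disease")
heart = Specialist("heart disease")
internist = Specialist("internal medicine", [heart, liver])

case = {"findings": {"heart disease"}}
print(internist.solve(case))  # ['internal medicine', 'heart disease']
```

The hierarchy does the same type of problem solving at every node; only the conceptual content differs, which is exactly the decomposition the framework proposes.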
Cooperative distributed problem solving networks are distributed networks of semi-autonomous processing nodes that work together to solve a single problem. The Distributed Vehicle Monitoring Testbed is a flexible and fully-instrumented research tool for empirically evaluating alternative designs for these networks. The testbed simulates a class of distributed knowledge-based problem solving systems operating on an abstracted version of a vehicle monitoring task. There are two important aspects to the testbed: (1) it implements a novel generic architecture for distributed problem solving networks that exploits the use of sophisticated local node control and meta-level control to improve global coherence in network problem solving; (2) it serves as an example of how a testbed can be engineered to permit the empirical exploration of design issues in knowledge-based AI systems. The testbed can simulate different degrees of sophistication in problem solving knowledge and focus-of-attention mechanisms, vary the distribution and characteristics of error in its (simulated) input data, and measure the progress of problem solving.
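The cooperating-nodes idea can be illustrated with a minimal sketch. This is a hypothetical toy, not the DVMT architecture: each node sees only part of a vehicle track and exchanges partial hypotheses with a neighbor, so the network assembles a result no single node could produce from its local data alone.

```python
# Toy sketch of semi-autonomous nodes cooperating on one problem: nodes
# exchange partial hypotheses (not raw sensor data) until a complete
# vehicle track is assembled. All names are illustrative.

class Node:
    def __init__(self, name, local_observations):
        self.name = name
        # Partial track hypotheses as (time, position) pairs.
        self.hypotheses = set(local_observations)

    def send(self, neighbor):
        # Communicate abstracted partial results to a neighboring node.
        neighbor.hypotheses |= self.hypotheses

a = Node("A", {(1, 10), (2, 20)})   # node A sees the early track segment
b = Node("B", {(3, 30)})            # node B sees the late track segment
a.send(b)
print(sorted(b.hypotheses))  # [(1, 10), (2, 20), (3, 30)]
```

In the real testbed, what to send, when, and to whom is governed by local and meta-level control policies; this sketch only shows the structural point that global coherence emerges from exchanging partial solutions.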
The AAAI President's address (the previous article, by Nils Nilsson) presents an eloquent argument for a particular AI paradigm that may be summarized by what Nils calls the "propositional doctrine": AI is the study of how to acquire and represent knowledge within a logic-like propositional formalism, and the study of how to manipulate this knowledge by use of logical operations and rules of inference. Although we concur with many of Nils's other assertions, this propositional doctrine seems far too extreme: a lot of interesting and important AI research is done outside of the logic-and-theorem-proving paradigm. Indeed, the view that other lines of inquiry serve only to produce tools that may be procedurally attached to an AI (logic-and-theorem-proving) architecture seems a kind of Logic Imperialism to those of us it would relegate to working in the procedure factories. This paper, therefore, constitutes an initial salvo over (into?) the bow. We will focus on two central questions in this rebuttal: What is an appropriate research paradigm for AI?
Schlumberger is a large, multinational corporation concerned primarily with the measurement, collection, and interpretation of data. For the past fifty years, most of its activities have been related to hydrocarbon exploration. The efficient location and production of hydrocarbons from an underground formation requires a great deal of knowledge about the formation, ranging in scale from the size and shape of the rock's pore spaces to the size and shape of the entire reservoir. Schlumberger provides its clients with two types of information: measurements, called logs, of the petrophysical properties of the rock around the borehole, such as its electrical, acoustical, and radioactive characteristics; and interpretations of these logs in terms of geophysical properties such as porosity and mineral composition. Since log interpretation is an expert skill, the emergence of expert systems technology prompted Schlumberger's initial interest in Artificial Intelligence.
In the past twenty years, much time, effort, and money have been expended on designing an unambiguous representation of natural languages to make them accessible to computer processing. These efforts have centered around creating schemata designed to parallel logical relations with relations expressed by the syntax and semantics of natural languages, which are clearly cumbersome and ambiguous in their function as vehicles for the transmission of logical data. Understandably, there is a widespread belief that natural languages are unsuitable for the transmission of many ideas that artificial languages can render with great precision and mathematical rigor. But this dichotomy, which has served as a premise underlying much work in the areas of linguistics and artificial intelligence, is a false one. There is at least one language, Sanskrit, which for the duration of almost 1000 years was a living spoken language with a considerable literature of its own. Besides works of literary value, there was a long philosophical and grammatical tradition that has continued to exist with undiminished vigor until the present century.
The TRW Defense Systems Group develops large man-machine networks that solve problems for government agencies. Until a few years ago these networks were either tightly-coupled humans loosely supported by machines -- like our ballistic missile system engineering organization, which provides technical advice to the Air Force -- or tightly-coupled machines loosely controlled by humans -- like the ground station for the NASA Tracking and Data Relay Satellite System. Because we have been producing first-of-a-kind systems like these since the early 1950s, we consider ourselves leaders in the social art of assembling effective teams of diverse experts, and in the engineering art of conceiving and developing networks of interacting machines. But in the mid-1970s we began building systems in which humans and machines must be tightly coupled to each other -- systems like the Sensor Data Fusion Center. Then we found that our well-worked system development techniques did not completely apply, and that our system engineering handbook needed a new chapter on communication between people and machines.