Information Technology
R1 Revisited: Four Years in the Trenches
Bachant, Judith, McDermott, John
In 1980, Digital Equipment Corporation began to use a rule-based system, called R1 by some and XCON by others, to configure VAX-11 computer systems. In the intervening years, R1's knowledge has increased substantially and its usefulness to Digital continues to grow. This article describes what has been involved in extending R1's performance over that four-year period.
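The abstract describes R1 as a rule-based configurer without showing what such a rule looks like. Below is a minimal Python sketch of a forward-chaining configuration rule; the rule, its condition, and the component names (e.g., "unibus-adapter") are hypothetical illustrations and are not drawn from R1/XCON's actual knowledge base.

    # Minimal sketch of forward chaining over configuration rules.
    # Rule content and component names are hypothetical, not R1's knowledge.

    def needs_unibus_adapter(state):
        # Hypothetical condition: a UNIBUS device was ordered but no adapter is configured yet.
        return any(d["bus"] == "unibus" for d in state["devices"]) \
            and "unibus-adapter" not in state["configured"]

    def add_unibus_adapter(state):
        state["configured"].append("unibus-adapter")

    RULES = [(needs_unibus_adapter, add_unibus_adapter)]

    def configure(state):
        """Fire the first applicable rule and repeat until no rule applies."""
        fired = True
        while fired:
            fired = False
            for condition, action in RULES:
                if condition(state):
                    action(state)
                    fired = True
                    break
        return state

    order = {"devices": [{"name": "RK07 disk", "bus": "unibus"}], "configured": []}
    print(configure(order)["configured"])   # -> ['unibus-adapter']

The loop structure (match a rule against the current partial configuration, fire it, repeat) is the generic production-system cycle the abstract alludes to; a real configurer would carry hundreds of such rules.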
Artificial Intelligence in Transition
In the past fifteen years artificial intelligence has changed from being the preoccupation of a handful of scientists to a thriving enterprise that has captured the imagination of world leaders and ordinary citizens alike. While corporate and government officials organize new projects whose potential impact is widespread, to date few people have been more affected by the transition than those already in the field. I review here some aspects of this transition, and pose some issues that it raises for AI researchers, developers, and leaders.
On the Development of Commercial Expert Systems
We use our experience with the Dipmeter Advisor system for well-log interpretation as a case study to examine the development of commercial expert systems. We discuss the nature of these systems as we see them in the coming decade, characteristics of the evolution process, development methods, and skills required in the development team. We argue that the tools and ideas of rapid prototyping and successive refinement accelerate the development process. We note that different types of people are required at different stages of expert system development: those who are primarily knowledgeable in the domain, but who can use the framework to expand the domain knowledge; and those who can actually design and build expert systems. Finally, we discuss the problem of technology transfer and compare our experience with some of the traditional wisdom of expert system development.
Review of A Mathematical Theory of Evidence
It may be argued that this, in principle, is a more realistic approach because it addresses, rather than finesses, the problem of incomplete information in the knowledge base. On the other hand, the Dempster-Shafer theory provides a basis, at least at present, for only a small subset of the rules of combination which are needed for inferencing in expert systems. In particular, the theory does not address the issue of chaining, nor does it come to grips with the fuzziness of probabilities and certainty factors. Thus, although the theory is certainly a step in the right direction, for it provides a framework for dealing with granular data, it does require a great deal of further development to become a broadly useful tool for the management of uncertainty in expert systems. Although not easy to understand, Shafer's book contains a wealth of significant results, and is a must for anyone who wants to do serious research on problems relating to the rules of combination of evidence in expert systems. Indeed, there is no doubt that, in the years to come, the Dempster-Shafer theory and its extensions will become an integral part of the theory of such systems and will certainly occupy an important place in knowledge engineering and related fields.
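For readers unfamiliar with the rules of combination the review refers to, the sketch below implements Dempster's rule of combination for two basic probability assignments over a small frame of discernment. The frame, hypotheses, and mass values are made-up illustrative numbers, not an example from Shafer's book.

    # Minimal sketch of Dempster's rule of combination for two independent
    # bodies of evidence. Masses and hypotheses are illustrative only.

    from itertools import product

    def combine(m1, m2):
        """Combine two basic probability assignments (dicts: frozenset -> mass)."""
        conflict = 0.0
        combined = {}
        for (a, x), (b, y) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y           # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict; evidence cannot be combined")
        return {s: v / (1.0 - conflict) for s, v in combined.items()}

    # Frame of discernment {flu, cold}; two independent sources of evidence.
    m1 = {frozenset({"flu"}): 0.6, frozenset({"flu", "cold"}): 0.4}
    m2 = {frozenset({"cold"}): 0.5, frozenset({"flu", "cold"}): 0.5}
    print(combine(m1, m2))

The normalization by 1 - conflict is exactly the step the review's concerns about chaining and fuzziness bear on: the rule handles one pairwise combination cleanly, but says nothing by itself about propagating belief through chains of uncertain rules.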
Applications Development Using a Hybrid Artificial Intelligence Development System
Kunz, John C., Kehler, Thomas P., Williams, Michael D.
This article describes our initial experience with building applications programs in a hybrid AI tool environment. Traditional AI systems developments have emphasized a single methodology, such as frames, rules, or logic programming, as a methodology that is natural, efficient, and uniform. The applications we have developed suggest that naturalness, efficiency, and flexibility are all increased by trading uniformity for the power that is provided by a small set of appropriate programming and representation tools. The tools we use are based on five major AI methodologies: frame-based knowledge representation with inheritance, rule-based reasoning, LISP, interactive graphics, and active values. Object-oriented computing provides a principle for unifying these different methodologies within a single system.
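To make two of the listed methodologies concrete, here is a minimal sketch of frame-like objects with inheritance and "active values" (slots that trigger an attached procedure whenever they are written, for example to update an interactive display). This is illustrative Python, not the interface of the hybrid system described in the article.

    # Minimal sketch: frames with inheritance plus active values (demons).
    # Class names, slot names, and the gauge procedure are illustrative only.

    class ActiveValue:
        """Descriptor that calls an optional demon whenever the slot is set."""
        def __init__(self, default=None, on_set=None):
            self.default = default
            self.on_set = on_set
        def __set_name__(self, owner, name):
            self.name = "_" + name
        def __get__(self, obj, objtype=None):
            if obj is None:
                return self
            return getattr(obj, self.name, self.default)
        def __set__(self, obj, value):
            setattr(obj, self.name, value)
            if self.on_set:
                self.on_set(obj, value)

    def redraw_gauge(unit, value):
        # Stand-in for an interactive-graphics update tied to the slot.
        print(f"gauge for {unit.name}: {value} rpm")

    class Equipment:                       # parent frame
        status = ActiveValue(default="idle")

    class Pump(Equipment):                 # child frame inherits slots
        speed = ActiveValue(default=0, on_set=redraw_gauge)
        def __init__(self, name):
            self.name = name

    p = Pump("P-101")
    p.speed = 1200                         # setting the slot triggers the demon

The point of the sketch is the unifying principle the abstract names: the frame, the rule-like demon, and the graphics hook are all just behaviors attached to one object.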
Probability Concepts for an Expert System Used for Data Fusion
Probability concepts for rule-based expert systems are developed that are compatible with probability used in data fusion of imprecise information. Procedures for treating probabilistic evidence are presented, which include the effects of statistical dependence. Confidence limits are defined as being proportional to root-mean-square errors in estimates, and a method is outlined that allows the confidence limits in the probability estimate of the hypothesis to be expressed in terms of the confidence limits in the estimate of the evidence. Procedures are outlined for weighting and combining multiple reports that pertain to the same item of evidence. The illustrative examples apply to tactical data fusion, but the same probability procedures can be applied to other expert systems.
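As one concrete illustration of weighting and combining multiple reports on the same item of evidence, the sketch below uses inverse-variance weighting and reports a confidence limit proportional to the root-mean-square error of the combined estimate. This is a generic statistical example under an independence assumption, not necessarily the exact procedure the article develops.

    # Minimal sketch: combine several reports of the same evidence probability
    # by inverse-variance weighting; numbers are illustrative only.

    import math

    def combine_reports(estimates, rms_errors):
        """estimates: probability estimates p_i; rms_errors: their RMS errors s_i."""
        weights = [1.0 / s**2 for s in rms_errors]
        total = sum(weights)
        p = sum(w * e for w, e in zip(weights, estimates)) / total
        s = math.sqrt(1.0 / total)          # RMS error of the combined estimate
        return p, s

    # Three independent reports about the same item of evidence.
    p, s = combine_reports([0.70, 0.80, 0.65], [0.10, 0.15, 0.20])
    print(f"combined estimate {p:.3f}, confidence limit proportional to {s:.3f}")

A more precise report (smaller RMS error) pulls the combined estimate toward its value, and the combined RMS error is never larger than that of the best single report, which is the intuition behind fusing redundant sensor reports at all.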
Artificial Intelligence Research at NASA Langley Research Center (Research in Progress)
Orlando, Nancy, Abbott, Kathy, Rogers, James
Research in the field of artificial intelligence is developing rapidly at the various NASA centers, including Langley Research Center in Hampton, Virginia. AI studies at Langley involve research for application in aircraft flight management, remote space teleoperators and robots, and structural optimization.
Artificial Intelligence Research at GTE Laboratories (Research in Progress)
GTE Laboratories is the central corporate research and development facility for the sixty subsidiaries of the worldwide GTE corporation. Located in the Massachusetts Route 128 high technology area, the five laboratories that comprise GTE Laboratories generate the ideas, products, systems, and services that provide technical leadership for GTE. The two laboratories that conduct artificial intelligence research are the Computer Science Laboratory (CSL) and the Fundamental Research Laboratory (FRL). Artificial intelligence projects within the CSL are directed toward research on techniques used in expert systems and their application to GTE products and services. AI projects within the FRL have longer-term AI research goals.
Introduction to the COMTEX Microfiche Edition of Reports on Artificial Intelligence from Carnegie-Mellon University
Originally it was Complex Information Processing. That was the name Herb Simon and I chose in 1956 to describe the area in which we were working. It didn't take long before it became Artificial Intelligence (AI). Coined by John McCarthy, that term has stuck firmly, despite continual grumblings that any other name would be twice as fair (though no grumblings by me; I like the present name). Complex Information Processing lives on now only in the title of the CIP Working Papers, a series started by Herb Simon in 1956 and still accumulating entries (to 447). However, from about 1965 much of the work on artificial intelligence that was not related to psychology began to appear in technical reports of the Computer Science Department. These reports, never part of a coherent numbered series until 1978, proliferated in all directions. Starting in the early 1970s (no one can recall exactly when), they did become the subject of a general mailing and thus began to form what everyone thinks of as the CMU Computer Science Technical Reports.
We Need Better Standards for Artificial Intelligence Research: President's Message
The state of the art in any science includes the criteria for evaluating research. Like every other aspect of the science, it ... The criteria for evaluating AI research are not in very good shape. I had intended to produce four presidential messages during my term but have managed only two, because this one has proved so difficult to write. An example is the alpha-beta heuristic for game playing. Humans use it, but it wasn't identified by the writers of the first chess programs. It doesn't constitute a game playing program, but it seems clearly necessary, because without ...
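The alpha-beta heuristic referred to above can be stated compactly in code. The following minimal Python sketch searches a small hand-built game tree whose structure and leaf scores are purely illustrative; it shows the idea (cut off a branch once it cannot affect the final choice), not a complete game-playing program.

    # Minimal sketch of alpha-beta pruning over a tiny hand-built game tree.
    # The tree and its leaf scores are made up for illustration.

    def alphabeta(node, alpha, beta, maximizing):
        if not isinstance(node, list):          # leaf: static evaluation
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:               # cutoff: remaining children pruned
                    break
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alphabeta(child, alpha, beta, True))
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value

    tree = [[3, 5], [2, 9], [0, 7]]             # depth-2 tree, leaves are scores
    print(alphabeta(tree, float("-inf"), float("inf"), True))   # -> 3

In this small example the leaves 9 and 7 are never examined, which is the heuristic's whole contribution: the same move is chosen while much of the tree is skipped.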