Artificial Intelligence Research at GTE Laboratories (Research in Progress)

AI Magazine

Located in the Massachusetts Route 128 high-technology area, the five laboratories that comprise GTE Laboratories generate the ideas, products, systems, and services that provide technical leadership for GTE. The two laboratories that conduct artificial intelligence research are the Computer Science Laboratory (CSL) and the Fundamental Research Laboratory (FRL). AI projects within the CSL are directed toward research on the techniques used in expert systems and their application to GTE products and services; AI projects within the FRL pursue longer-term research goals.


Probability Concepts for an Expert System Used for Data Fusion

AI Magazine

Probability concepts for rule-based expert systems are developed that are compatible with the probabilities used in the data fusion of imprecise information. Procedures for treating probabilistic evidence are presented, including the effects of statistical dependence. Confidence limits are defined as proportional to root-mean-square errors in estimates, and a method is outlined that allows the confidence limits on the probability estimate of a hypothesis to be expressed in terms of the confidence limits on the estimate of the evidence. Procedures are outlined for weighting and combining multiple reports that pertain to the same item of evidence. The illustrative examples apply to tactical data fusion, but the same probability procedures can be applied to other expert systems.
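
The article itself gives the detailed procedures; as a rough illustration of the kind of weighting involved, the sketch below fuses several probability reports on the same item of evidence by inverse-variance weighting. The weighting scheme and the function name are our assumptions for illustration, not necessarily the procedure the authors describe.

```python
"""Sketch: combining multiple reports on the same item of evidence.

Assumption (not from the article): each report i gives a probability
estimate p_i with a root-mean-square error s_i, and reports are fused
by inverse-variance weighting, so more precise reports count more.
"""

def combine_reports(reports):
    """Fuse (probability, rms_error) pairs into one estimate.

    reports: list of (p_i, s_i) tuples, s_i > 0.
    Returns (p_fused, s_fused), using weights w_i = 1 / s_i**2.
    """
    weights = [1.0 / s**2 for _, s in reports]
    total = sum(weights)
    p_fused = sum(w * p for w, (p, _) in zip(weights, reports)) / total
    # RMS error of the weighted mean, assuming independent reports.
    s_fused = (1.0 / total) ** 0.5
    return p_fused, s_fused

# Three reports on the same evidence: agreeing values, differing precision.
print(combine_reports([(0.70, 0.10), (0.65, 0.05), (0.80, 0.20)]))
```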


Applications Development Using a Hybrid Artificial Intelligence Development System

AI Magazine

This article describes our initial experience with building applications programs in a hybrid AI tool environment. Traditional AI system development has emphasized a single methodology, such as frames, rules, or logic programming, as one that is natural, efficient, and uniform. The applications we have developed suggest that naturalness, efficiency, and flexibility are all increased by trading uniformity for the power provided by a small set of appropriate programming and representation tools. The tools we use are based on five major AI methodologies: frame-based knowledge representation with inheritance, rule-based reasoning, LISP, interactive graphics, and active values. Object-oriented computing provides a principle for unifying these different methodologies within a single system.
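
Of the five methodologies, active values may be the least familiar; the abstract does not describe the tool's actual interface, but as a minimal sketch, an active value can be modeled as a slot that runs attached procedures whenever it is written, for example to keep an interactive-graphics display in step with the knowledge base. The class and method names below are hypothetical.

```python
"""Sketch: an 'active value' in the LOOPS/KEE sense.

Assumption: the hybrid tool's real API is not shown in the article;
this is a minimal Python analogue in which writing a slot triggers
attached procedures (e.g., to redraw a graphics gauge).
"""

class ActiveValue:
    """A slot that runs observer functions whenever it is set."""

    def __init__(self, value=None):
        self._value = value
        self._observers = []   # functions called as fn(old, new)

    def watch(self, fn):
        self._observers.append(fn)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        old, self._value = self._value, new
        for fn in self._observers:
            fn(old, new)

temperature = ActiveValue(20)
temperature.watch(lambda old, new: print(f"gauge: {old} -> {new}"))
temperature.value = 95   # prints: gauge: 20 -> 95
```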


R1 Revisited: Four Years in the Trenches

AI Magazine

In 1980, Digital Equipment Corporation began to use a rule-based system, called R1 by some and XCON by others, to configure VAX-11 computer systems. In the intervening years, R1's knowledge has increased substantially, and its usefulness to Digital continues to grow. This article describes what has been involved in extending R1's performance over that four-year period.


Artificial Intelligence in Transition

AI Magazine

In the past fifteen years, artificial intelligence has changed from being the preoccupation of a handful of scientists to a thriving enterprise that has captured the imagination of world leaders and ordinary citizens alike. While corporate and government officials organize new projects whose potential impact is widespread, few people to date have been more affected by the transition than those already in the field. I review here some aspects of this transition and pose some issues that it raises for AI researchers, developers, and leaders.


On the Development of Commercial Expert Systems

AI Magazine

We use our experience with the Dipmeter Advisor system for well-log interpretation as a case study to examine the development of commercial expert systems. We discuss the nature of these systems as we see them in the coming decade, characteristics of the evolution process, development methods, and the skills required in the development team. We argue that the tools and ideas of rapid prototyping and successive refinement accelerate the development process. We note that different types of people are required at different stages of expert system development: those who are primarily knowledgeable in the domain but can use the framework to expand the domain knowledge, and those who can actually design and build expert systems. Finally, we discuss the problem of technology transfer and compare our experience with some of the traditional wisdom of expert system development.


Review of A Mathematical Theory of Evidence

AI Magazine

It may be argued that this is, in principle, a more realistic approach, because it addresses, rather than finesses, the problem of incomplete information in the knowledge base. On the other hand, the Dempster-Shafer theory provides a basis, at least at present, for only a small subset of the rules of combination needed for inferencing in expert systems. In particular, the theory does not address the issue of chaining, nor does it come to grips with the fuzziness of probabilities and certainty factors. Thus, although the theory is certainly a step in the right direction, for it provides a framework for dealing with granular data, it requires a great deal of further development to become a broadly useful tool for the management of uncertainty in expert systems. Although not easy to understand, Shafer's book contains a wealth of significant results and is a must for anyone who wants to do serious research on problems relating to the rules of combination of evidence in expert systems. Indeed, there is no doubt that, in the years to come, the Dempster-Shafer theory and its extensions will become an integral part of the theory of such systems and will occupy an important place in knowledge engineering and related fields.
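
For readers unfamiliar with the rule the review keeps referring to, here is a minimal sketch of Dempster's rule of combination: mass assigned to pairs of subsets with an empty intersection is treated as conflict and renormalized away. The rule itself is standard; the example frame and mass values are ours, chosen only for illustration.

```python
"""Sketch: Dempster's rule of combination for two mass functions.

Masses are assigned to subsets of the frame of discernment; combining
them renormalizes away the conflicting (empty-intersection) mass.
"""

from itertools import product

def dempster(m1, m2):
    """Combine two mass functions, each a dict {frozenset: mass}."""
    combined, conflict = {}, 0.0
    for (b, wb), (c, wc) in product(m1.items(), m2.items()):
        a = b & c
        if a:
            combined[a] = combined.get(a, 0.0) + wb * wc
        else:
            conflict += wb * wc
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {a: w / (1.0 - conflict) for a, w in combined.items()}

# Two pieces of evidence over the frame {flu, cold}.
m1 = {frozenset({"flu"}): 0.6, frozenset({"flu", "cold"}): 0.4}
m2 = {frozenset({"cold"}): 0.5, frozenset({"flu", "cold"}): 0.5}
print(dempster(m1, m2))
```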


Artificial Intelligence Research at NASA Langley Research Center (Research in Progress)

AI Magazine

Research in the field of artificial intelligence is developing rapidly at the various NASA centers, including Langley Research Center in Hampton, Virginia. AI studies at Langley involve research for application in aircraft flight management, remote space teleoperators and robots, and structural optimization.