EXPRS: A Prototype Expert System Using Prolog for Data Fusion
The prototype system is written in Prolog, a language that has proved to be very powerful and easy to use for problem/rule development. The resulting prototype system (called EXPRS, for Expert Prolog System) uses English-like rule constructs written as Prolog code. This approach enables the system to generate answers automatically to "why" a rule fired and "how" that rule fired. In addition, a rule-clause construct is provided which allows direct access to Prolog code routines.
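The "why"/"how" explanation facility described above can be sketched with a toy forward-chaining engine. This is a minimal illustration in Python, not EXPRS itself (which is written in Prolog, and whose actual rule syntax is not shown in the abstract); the rule names and facts are hypothetical. The key idea is that each derived conclusion records the rule and premises that produced it, so explanations fall out of the trace.

```python
# Hypothetical rules: (name, premises, conclusion). EXPRS's real rules
# are English-like Prolog constructs; this flat format is an assumption.
rules = [
    ("r1", ["radar_contact", "fast_mover"], "aircraft"),
    ("r2", ["aircraft", "no_iff_response"], "possible_hostile"),
]

def run(initial_facts):
    """Forward-chain until no rule adds a new fact, keeping a trace."""
    facts = set(initial_facts)
    trace = {}  # conclusion -> (rule name, premises used)
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                trace[conclusion] = (name, premises)
                changed = True
    return facts, trace

def how(conclusion, trace):
    """Explain recursively how a conclusion was derived."""
    if conclusion not in trace:
        return conclusion + " was given as input."
    name, premises = trace[conclusion]
    lines = ["%s was concluded by rule %s from: %s."
             % (conclusion, name, ", ".join(premises))]
    for p in premises:
        lines.append(how(p, trace))
    return "\n".join(lines)

facts, trace = run(["radar_contact", "fast_mover", "no_iff_response"])
print(how("possible_hostile", trace))
```

Because the trace maps each conclusion to the rule that fired, "how" is a walk down the derivation and "why (did you ask for this fact)" would be the same walk taken upward from a premise.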
Artificial Intelligence Research at Vanderbilt University (Research in Progress)
At Vanderbilt University we are exploring the use of expert systems in a broad range of application areas. Programming is in Franz Lisp on a VAX 11/780, UCI LISP on a DEC-10, and IQ LISP on an IBM XT. Currently, personnel from four schools in the University are participating. Listed below are brief descriptions of current projects.
Expert Systems Without Computers, or Theory and Trust in Artificial Intelligence
Knowledge engineers qualified to build expert systems are currently in short supply. The production of useful and trustworthy expert systems can be significantly increased by pursuing the idea of articulate apprenticeship independent of computer implementations. Making theoretical progress in artificial intelligence should also help.
Experience with INTELLECT: Artificial Intelligence Technology Transfer
AI technology transfer is the diffusion of AI research techniques into commercial products. I have been involved in this process since 1975, when the Artificial Intelligence Corporation began to develop ROBOT, the prototype of INTELLECT, a commercially viable natural language interface to data base systems which has been on the market since 1981. In this article, I will discuss AI technology transfer with particular reference to my experiences with the commercialization of INTELLECT. I will begin with the historical perspective of where the field of AI came from, where it is now, and where it is going. Next, I will describe my interpretation of the present market structure for AI products and some specific marketing perspectives. I will then briefly describe the product INTELLECT and its capabilities as an example of a state-of-the-art commercial system. Finally, I will describe some of the experiences, which I think are typical, that my company has encountered in commercializing its system.
A Perspective on Automatic Programming
Most work in automatic programming has focused primarily on the roles of deduction and programming knowledge. However, the role played by knowledge of the task domain seems to be at least as important, both for the usability of an automatic programming system and for the feasibility of building one which works on non-trivial problems. This perspective has evolved during the course of a variety of studies over the last several years, including detailed examination of existing software for a particular domain (quantitative interpretation of oil well logs) and the implementation of an experimental automatic programming system for that domain. The importance of domain knowledge has two important implications: a primary goal of automatic programming research should be to characterize the programming process for specific domains; and a crucial issue to be addressed in these characterizations is the interaction of domain and programming knowledge during program synthesis.
Artificial Intelligence Research at the Information Sciences Institute (Research in Progress)
Founded in 1972 to develop and disseminate new ideas in computer science, the Information Sciences Institute (ISI) is an off-campus research center of the University of Southern California, with a combined research and support staff of over one hundred. The Institute engages in a broad set of research and application-oriented projects in the computer sciences. The Institute's AI research focuses on program synthesis, user interfaces, programming environments, natural language, and expert systems. AI researchers are supported by ten personal Lisp workstations, several VAXs, two TOPS-20 systems, and a magnificent view of Marina del Rey.
What Should Artificial Intelligence Want from the Supercomputers?
While some proposals for supercomputers would extend the power of existing machines like CDC and Cray supercomputers, others suggest radical changes of architecture to speed up non-traditional operations such as logical inference in PROLOG, recognition/action in production systems, or message passing. We examine the case of parallel PROLOG to identify several related computations which subsume those of parallel PROLOG, but which have much wider interest, and which may have roughly the same difficulty of mechanization. Similar considerations apply to some other proposed architectures as well, raising the possibility that current efforts may be limiting their aims unnecessarily.
Research at Jet Propulsion Laboratory
AI research at JPL started in 1972, when design and construction of an experimental "Mars Rover" began. Early in that effort, it was recognized that rover planning capabilities were inadequate. Research in planning was begun in 1975, and work on a succession of AI expert systems of steadily increasing power has continued to the present. Within the group, we have concentrated our efforts on expert systems, although work on vision and robotics has continued in a separate organization, with which we have maintained informal contacts.
GLISP: A Lisp-Based Programming System with Data Abstraction
GLISP programs are shorter and more readable than equivalent LISP programs. The object code produced by GLISP is optimized, making it about as efficient as handwritten Lisp. An integrated programming environment is provided, including automatic incremental compilation, interpretive programming features, and an intelligent display-based inspector/editor for data and data-type descriptions. GLISP code is relatively portable; the compiler and data inspector are implemented for most major dialects of LISP and are available free or at nominal cost.
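The data-abstraction idea behind GLISP can be illustrated loosely as follows. This is a Python sketch of the general concept, not GLISP itself (which compiles abstract object descriptions into efficient Lisp); the "description" format and record layout here are assumptions for illustration. The point is that code accessing a field never mentions the concrete storage layout, so the representation can change without touching the using code.

```python
def make_accessors(description):
    """Generate getter functions for a list-backed record layout.

    The returned map hides the fact that fields live at list indices;
    a GLISP-style compiler would instead compile such accesses inline.
    """
    return {field: (lambda i: (lambda rec: rec[i]))(index)
            for index, field in enumerate(description)}

# Hypothetical record type: a 2-D point stored as a plain list.
point = make_accessors(["x", "y"])
p = [3, 4]
print(point["x"](p), point["y"](p))  # field access hides the list layout
```

Swapping the underlying representation (say, a dict instead of a list) would only require a new `make_accessors`, which is the portability and abstraction benefit the abstract describes.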
The Distributed Vehicle Monitoring Testbed: A Tool for Investigating Distributed Problem Solving Networks
Lesser, Victor R., Corkill, Daniel G.
Cooperative distributed problem solving networks are distributed networks of semi-autonomous processing nodes that work together to solve a single problem. The Distributed Vehicle Monitoring Testbed is a flexible and fully-instrumented research tool for empirically evaluating alternative designs for these networks. The testbed simulates a class of distributed knowledge-based problem solving systems operating on an abstracted version of a vehicle monitoring task. There are two important aspects to the testbed: (1) it implements a novel generic architecture for distributed problem solving networks that exploits sophisticated local node control and meta-level control to improve global coherence in network problem solving; (2) it serves as an example of how a testbed can be engineered to permit the empirical exploration of design issues in knowledge-based AI systems. The testbed is capable of simulating different degrees of sophistication in problem solving knowledge and focus-of-attention mechanisms, of varying the distribution and characteristics of error in its (simulated) input data, and of measuring the progress of problem solving. Node configuration and communication channel characteristics can also be independently varied in the simulated network.
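The cooperative-network idea can be sketched in miniature. This is an assumption-laden toy in Python, not the testbed's actual architecture: each node senses only part of a vehicle track, and nodes exchange partial hypotheses until some node can assemble the full track. Node names, the track data, and the one-round communication pattern are all invented for illustration.

```python
class Node:
    """A semi-autonomous node with a partial view of the vehicle track."""

    def __init__(self, name, observed):
        self.name = name
        self.hypotheses = set(observed)  # locally sensed track segments

    def send(self, other):
        """Share local hypotheses with a neighboring node."""
        other.hypotheses |= self.hypotheses

def solved(node, full_track):
    """True if this node's hypotheses cover the entire track."""
    return set(full_track) <= node.hypotheses

full_track = [(0, 0), (1, 1), (2, 2), (3, 3)]
a = Node("A", full_track[:2])  # hypothetical western sensor
b = Node("B", full_track[2:])  # hypothetical eastern sensor

a.send(b)  # one round of communication: A's partial track reaches B
print(solved(a, full_track), solved(b, full_track))
```

Even this toy shows the design questions the testbed instruments: which node communicates what and when determines whether, and where, a globally coherent solution emerges.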