Industry
Knowledge Programming in Loops
Stefik, Mark, Bobrow, Daniel G., Mittal, Sanjay
Early this year fifty people took an experimental course at Xerox PARC on knowledge programming in Loops. During the course, they extended and debugged small knowledge systems in a simulated economics domain called Truckin. Everyone learned how to use the environment Loops, formulated the knowledge for their own program, and represented it in Loops. At the end of the course a knowledge competition was run so that the strategies used in the different systems could be compared. The punchline to this story is that almost everyone learned enough about Loops to complete a small knowledge system in only three days. Although one must exercise caution in extrapolating from small experiments, the results suggest that there is substantial power in integrating multiple programming paradigms.
The Distributed Vehicle Monitoring Testbed: A Tool for Investigating Distributed Problem Solving Networks
Lesser, Victor R., Corkill, Daniel G.
Cooperative distributed problem solving networks are distributed networks of semi-autonomous processing nodes that work together to solve a single problem. The Distributed Vehicle Monitoring Testbed is a flexible and fully-instrumented research tool for empirically evaluating alternative designs for these networks. The testbed simulates a class of distributed knowledge-based problem solving systems operating on an abstracted version of a vehicle monitoring task. There are two important aspects to the testbed: (1) it implements a novel generic architecture for distributed problem solving networks that exploits sophisticated local node control and meta-level control to improve global coherence in network problem solving; (2) it serves as an example of how a testbed can be engineered to permit the empirical exploration of design issues in knowledge-based AI systems. The testbed is capable of simulating different degrees of sophistication in problem solving knowledge and focus-of-attention mechanisms, of varying the distribution and characteristics of error in its (simulated) input data, and of measuring the progress of problem solving. Node configuration and communication channel characteristics can also be independently varied in the simulated network.
Artificial Intelligence Research at the Artificial Intelligence Laboratory, Massachusetts Institute of Technology
The primary goal of the Artificial Intelligence Laboratory is to understand how computers can be made to exhibit intelligence. Two corollary goals are to make computers more useful and to understand certain aspects of human intelligence. Current research includes work on computer robotics and vision, expert systems, learning and commonsense reasoning, natural language understanding, and computer architecture.
On Evaluating Artificial Intelligence Systems for Medical Diagnosis
Among the difficulties in evaluating AI-type medical diagnosis systems are: the intermediate conclusions of the AI system need to be examined in addition to the "final" answer; the "superhuman human" fallacy must be guarded against; and methods for estimating how the approach will scale upwards to larger domains are needed. We propose to measure both the accuracy of diagnosis and the structure of reasoning, the latter with a view to gauging how well the system will scale up.
Artificial Intelligence: Some Legal Approaches and Implications
Various groups of ascertainable individuals have been granted the status of "persons" under American law, while that status has been denied to other groups. This article examines various analogies that might be drawn by courts in deciding whether to extend "person" status to intelligent machines, and the limitations that might be placed upon such recognition. As an alternative analysis, this article questions the legal status of various human/machine interfaces, and notes the difficulty in establishing an absolute point beyond which legal recognition will not extend.
Methodological Simplicity in Expert System Construction: The Case of Judgments and Reasoned Assumptions
Probabilistic rules and their variants have recently supported several successful applications of expert systems, in spite of the difficulty of committing informants to particular conditional probabilities or "certainty factors," and in spite of the experimentally observed insensitivity of system performance to perturbations of the chosen values. Here we survey recent developments concerning reasoned assumptions, which offer hope for avoiding the practical elusiveness of probabilistic rules while retaining theoretical power, for basing systems on the information unhesitatingly gained from expert informants, and for reconstructing the entailed degrees of belief later.
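The probabilistic rules this abstract discusses can be illustrated with a minimal sketch of a certainty-factor rule system in the style of MYCIN. The rule contents, fact names, and numeric values below are illustrative assumptions, not taken from the paper; only the parallel-combination formula is the classic MYCIN scheme.

```python
# Minimal sketch of a MYCIN-style certainty-factor rule system.
# Rules, fact names, and values are illustrative, not from the paper.

def combine_cf(cf1, cf2):
    """Combine two positive certainty factors supporting the same
    conclusion (the classic MYCIN parallel-combination formula)."""
    return cf1 + cf2 * (1 - cf1)

# Each rule: (set of required facts, conclusion, certainty factor).
RULES = [
    ({"income_high", "studies_market"}, "good_investment", 0.8),
    ({"diversified_portfolio"}, "good_investment", 0.5),
]

def infer(facts):
    """Fire every rule whose premises all hold, accumulating belief
    in each conclusion via combine_cf."""
    belief = {}
    for premises, conclusion, cf in RULES:
        if premises <= facts:  # all premises present
            prior = belief.get(conclusion, 0.0)
            belief[conclusion] = combine_cf(prior, cf)
    return belief

if __name__ == "__main__":
    # Both rules fire: 0.8 combined with 0.5 gives 0.8 + 0.5*(1-0.8) = 0.9
    print(infer({"income_high", "studies_market", "diversified_portfolio"}))
```

The experimentally observed insensitivity the abstract mentions is visible even in this toy: perturbing the 0.8 or 0.5 by modest amounts shifts the combined belief only slightly, which is part of Doyle's motivation for replacing such numbers with reasoned assumptions.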
On the Relationship Between Strong and Weak Problem Solvers
Ernst, George W., Banerji, Ranan B.
The basic thesis put forth in this article is that a problem solver is essentially an interpreter that carries out computations implicit in the problem formulation. A good problem formulation gives rise to what is conventionally called a strong problem solver; poor formulations correspond to weak problem solvers. Knowledge-based systems are discussed in the context of this thesis. We also make observations about the relationship between search strategy and problem formulation.
The Banishment of Paper-Work
It may come as a surprise to some to be told that the modern digital computer is really quite old in concept, and the year 1984 will be celebrated as the 150th anniversary of the invention of the first computer, the Analytical Engine of the Englishman Charles Babbage. One hundred and fifty years is really quite a long period of time in terms of modern science and industry and, at first glance, it seems unduly long for a new concept to come into full fruition. Unfortunately, Charles Babbage was ahead of his time, and it took one hundred years of technical development, the impetus of the Second World War, and the perception of John von Neumann to bring the computer into being. Now, twenty years later and with several generations of computers behind us, we are in a position to make a somewhat more meaningful prognosis than appeared possible in, say, 1948. We can only hope that we will not be as far off actuality as we believe George Orwell to be, or as far off in our time scale as were Charles Babbage and his almost equally famous interpreter, Lady Lovelace.