If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
This issue of the AI Magazine initiates a new and (we hope) regular feature, Reviews of Books. Before presenting our first book review, a few comments about the aims of this feature are in order. Since one goal of the AI Magazine is to provide a …, we are particularly interested in reviewing publications that attempt to provide tutorial and other forms of summary discussions of broad areas of artificial intelligence, and publications that examine existing research. Visions of applications of computer technology and artistic efforts can have a real effect on our general public. For the reasons outlined above, as well as others, review and discussion of popular treatments of work in AI are a useful adjunct to the standard sorts of review to be included in this column. We extend an invitation to anyone interested in submitting a review.
Machine learning has always been an integral part of artificial intelligence, and its methodology has evolved in concert with the major concerns of the field. In response to the difficulties of encoding ever-increasing volumes of knowledge in modern AI systems, many researchers have recently turned their attention to machine learning as a means to overcome the knowledge acquisition bottleneck. This article presents a taxonomic analysis of machine learning organized primarily by learning strategies and secondarily by knowledge representation and application areas. A historical survey outlining the development of various approaches to machine learning is presented, from early neural networks to present-day knowledge-intensive techniques.
Cooperative distributed problem solving networks are distributed networks of semi-autonomous processing nodes that work together to solve a single problem. The Distributed Vehicle Monitoring Testbed is a flexible and fully-instrumented research tool for empirically evaluating alternative designs for these networks. The testbed simulates a class of distributed knowledge-based problem solving systems operating on an abstracted version of a vehicle monitoring task. There are two important aspects to the testbed: (1) it implements a novel generic architecture for distributed problem solving networks that exploits the use of sophisticated local node control and meta-level control to improve global coherence in network problem solving; (2) it serves as an example of how a testbed can be engineered to permit the empirical exploration of design issues in knowledge-based AI systems. The testbed is capable of simulating different degrees of sophistication in problem solving knowledge and focus-of-attention mechanisms, of varying the distribution and characteristics of error in its (simulated) input data, and of measuring the progress of problem solving. Node configuration and communication channel characteristics can also be independently varied in the simulated network.
The primary goal of the Artificial Intelligence Laboratory is to understand how computers can be made to exhibit intelligence. Two corollary goals are to make computers more useful and to understand certain aspects of human intelligence. Current research includes work on computer robotics and vision, expert systems, learning and commonsense reasoning, natural language understanding, and computer architecture.
It may come as a surprise to some to be told that the modern digital computer is really quite old in concept, and the year 1984 will be celebrated as the 150th anniversary of the invention of the first computer, the Analytical Engine of the Englishman Charles Babbage. One hundred and fifty years is really quite a long period of time in terms of modern science and industry and, at first glance, it seems unduly long for a new concept to come into full fruition. Unfortunately, Charles Babbage was ahead of his time, and it took one hundred years of technical development, the impetus of the Second World War, and the perception of John von Neumann to bring the computer into being. We can only hope that we will not be as far off actuality as we believe George Orwell to be, or as far off in our time scale as were Charles Babbage and his almost equally famous interpreter, Lady Lovelace.
Among the difficulties in evaluating AI-type medical diagnosis systems are: the intermediate conclusions of the AI system need to be looked at in addition to the "final" answer; the "superhuman human" fallacy must be guarded against; and methods for estimating how the approach will scale upwards to larger domains are needed. We propose to measure both the accuracy of diagnosis and the structure of reasoning, the latter with a view to gauging how well the system will scale up.
Various groups of ascertainable individuals have been granted the status of "persons" under American law, while that status has been denied to other groups. This article examines various analogies that might be drawn by courts in deciding whether to extend "person" status to intelligent machines, and the limitations that might be placed upon such recognition. As an alternative analysis, this article questions the legal status of various human/machine interfaces, and notes the difficulty in establishing an absolute point beyond which legal recognition will not extend.
Probabilistic rules and their variants have recently supported several successful applications of expert systems, in spite of the difficulty of committing informants to particular conditional probabilities or "certainty factors," and in spite of the experimentally observed insensitivity of system performance to perturbations of the chosen values. Here we survey recent developments concerning reasoned assumptions, which offer hope for avoiding the practical elusiveness of probabilistic rules while retaining theoretical power, for basing systems on the information unhesitatingly gained from expert informants, and for reconstructing the entailed degrees of belief later.
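For readers unfamiliar with the "certainty factors" the abstract refers to, the following is a minimal sketch of the standard EMYCIN-style rule for combining two certainty factors bearing on the same hypothesis. It is offered purely as background illustration of the numeric machinery whose elusiveness the survey discusses; it is not the survey's own formalism.

```python
def combine_cf(cf1: float, cf2: float) -> float:
    """Combine two certainty factors (each in [-1, 1]) for the same
    hypothesis, per the classic EMYCIN combination rule."""
    if cf1 >= 0 and cf2 >= 0:
        # Both supportive: second factor closes part of the remaining gap.
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        # Both disconfirming: symmetric to the supportive case.
        return cf1 + cf2 * (1 + cf1)
    # Mixed evidence: normalize by the smaller magnitude.
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two independent rules each lending moderate support:
print(combine_cf(0.6, 0.4))  # ≈ 0.76
```

The combination is order-independent and never pushes a belief past ±1, which is precisely the kind of convenient-but-hard-to-elicit numeric scheme the reasoned-assumptions approach aims to sidestep.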