If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Advocates and critics of AI have long engaged in a debate that has generated a great deal of heat but little light. Whatever the merits of specific contributions to this ongoing debate, the fact that it continues points to the need for a reflective examination of the foundations of AI by its active practitioners. Following the lead of Earl MacCormac, we hope to advance such a reflective examination by considering questions of metaphor in science and the computational metaphor in AI. Specifically, we address three issues: the role of metaphor in science and AI, an examination of the computational metaphor, and an introduction to the possibility and potential value of using alternative metaphors as a foundation for AI theory.
This article analyzes an attempt to use computing technology, including AI, to improve the combat readiness of a U.S. Navy aircraft carrier. The method of introducing the new technology, as well as the organization's reaction to its use, is examined to discern why the carrier's personnel rejected a technically sophisticated attempt to increase mission capability. This effort to make advanced computing technology, such as expert systems, an integral part of the organizational environment and, thereby, to significantly alter traditional decision-making methods failed for two reasons: (1) the innovation of having users, as opposed to the navy research and development bureaucracy, perform the development function was in conflict with navy operational requirements and routines, and (2) the technology itself was either inappropriate or perceived by operational experts to be inappropriate for the tasks of the organization. Finally, this article identifies the obstacles that must be overcome to successfully introduce state-of-the-art computing technology into any organization.
This article reports on experiments performed using a black-box simulation of a spacecraft. The goal of this research is to learn to control the attitude of an orbiting satellite. The spacecraft must be able to operate with minimal human supervision. To this end, we are investigating the possibility of using adaptive controllers for such tasks. Laboratory tests have suggested that rule-based methods can be more robust than systems developed using traditional control theory. The BOXES learning system, which has already met with success in simulated laboratory tasks, is an effective design framework for this new exercise.
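The BOXES idea mentioned above can be sketched briefly: the continuous state space is partitioned into discrete "boxes," and each box independently learns which action has the better track record, judged by how long the system kept operating after that action was taken. The sketch below is an illustration of that scheme under simplifying assumptions (two actions, state variables scaled to [0, 1), survival time as the sole performance signal), not the original BOXES implementation or the satellite controller the article describes.

```python
class BoxesController:
    """Sketch of a BOXES-style learner: each box of the discretized
    state space keeps running survival-time statistics per action."""

    def __init__(self, bins=5, n_actions=2):
        self.bins = bins
        self.n_actions = n_actions
        # box -> [total_lifetime, trial_count] per action; counts start
        # at 1 to avoid division by zero before any learning occurs.
        self.stats = {}

    def _box(self, state):
        # Discretize each state variable (assumed scaled to [0, 1)).
        return tuple(min(int(v * self.bins), self.bins - 1) for v in state)

    def _cell(self, state):
        return self.stats.setdefault(
            self._box(state), [[0.0, 1.0] for _ in range(self.n_actions)])

    def act(self, state):
        # Choose the action with the highest mean survival time so far.
        cell = self._cell(state)
        return max(range(self.n_actions),
                   key=lambda a: cell[a][0] / cell[a][1])

    def learn(self, trace, lifetime):
        # After a trial ends, credit each visited (box, action) pair
        # with the time that remained before failure.
        for t, (state, action) in enumerate(trace):
            cell = self._cell(state)
            cell[action][0] += lifetime - t
            cell[action][1] += 1
```

Because every box learns from only its own local statistics, the method makes no assumptions about the plant dynamics, which is the property that suits it to black-box simulations of the kind the article describes.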
A survey of 150 papers from the Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90) shows that AI research follows two methodologies, each incomplete with respect to the goals of designing and analyzing AI systems but with complementary strengths. I propose a mixed methodology and illustrate it with examples from the proceedings.
Expertise comprises experience. In solving a new problem, we rely on past episodes. We need to remember which plans succeed and which fail. We need to know how to modify an old plan to fit a new situation. Case-based reasoning is a general paradigm for reasoning from experience. It assumes a memory model for representing, indexing, and organizing past cases and a process model for retrieving and modifying old cases and assimilating new ones. Case-based reasoning provides a scientific cognitive model. The research issues for case-based reasoning include the representation of episodic knowledge, memory organization, indexing, case modification, and learning. In addition, computer implementations of case-based reasoning address many of the technological shortcomings of standard rule-based expert systems. These engineering concerns include knowledge acquisition and robustness. In this article, I review the history of case-based reasoning, including research conducted at the Yale AI Project and elsewhere.
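The retrieve-modify-assimilate cycle described above can be made concrete with a minimal sketch. Everything here is an illustrative stand-in: the feature-overlap similarity measure, the caller-supplied adaptation function, and the flat case list are placeholders for the far richer memory and process models the article discusses.

```python
class CaseBasedReasoner:
    """Toy retrieve-adapt-retain loop for case-based reasoning."""

    def __init__(self):
        self.memory = []  # list of (problem_features, solution) pairs

    def retain(self, problem, solution):
        # Assimilate a new episode into memory.
        self.memory.append((problem, solution))

    def _similarity(self, a, b):
        # Toy index: number of shared feature-value pairs.
        return sum(1 for k in a if k in b and a[k] == b[k])

    def solve(self, problem, adapt):
        # Retrieve the most similar past case ...
        old_problem, old_solution = max(
            self.memory, key=lambda c: self._similarity(problem, c[0]))
        # ... modify its solution to fit the new situation ...
        solution = adapt(old_solution, old_problem, problem)
        # ... and retain the new episode for future use.
        self.retain(problem, solution)
        return solution
```

Even this toy shows why indexing and case modification are central research issues: the quality of the answer depends entirely on retrieving the right precedent and on having an adaptation rule that can bridge the gap between the old situation and the new one.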