If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Commentators on AI converge on two goals they believe define the field: (1) to better understand the mind by specifying computational models and (2) to construct computer systems that perform actions traditionally regarded as mental. We should recognize that AI has a third, hidden, more basic aim; that the first two goals are special cases of the third; and that the actual technical substance of AI concerns only this more basic aim. This third aim is to establish new computation-based representational media, media in which human intellect can come to express itself with different clarity and force. This article articulates this proposal by showing how the intellectual activity we label AI can be likened in revealing ways to each of five familiar technologies. AI is not about building artificial intelligences, nor is it about understanding the human mind or any other kind of mind.
We explain how the scientific study of biological systems offers an approach to the development of sensor-based robots that is complementary to the more formal analytic methods favored by roboticists; such study is also relevant to a number of classical problems addressed by the AI field. We take the domain of work on artificially and naturally intelligent systems in the broadest sense, including not just thinking but also sensing, perceiving, and motor actions on the environment. We offer an example of the scientific approach based on a selection of our experiments and empirically driven theoretical work on human haptic (tactual) object processing; the nature and role of active manual exploration is of particular concern. We further suggest how this program with humans can be modified and extended to guide the development of high-level manual exploration strategies for robots equipped with a haptic perceptual system.
A typical tour package comprises the flight to the destination and back, transfers between the airport and the hotel, board, and lodging. It is impossible for a travel agent to keep track of all the packages on offer, and traditional database-driven applications, as used by most tour operators, are not sufficient to support a consultative sales process on the World Wide Web. The last-minute travel application presented here uses case-based reasoning to bridge this gap and simulate the sales assistance of a human travel agent. A case retrieval net, used as the internal data structure, proved efficient in handling the large amount of data.
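The core of such a case-based system is retrieval by weighted similarity over package attributes. The sketch below is illustrative only: the attribute names, weights, and local similarity functions are assumptions, not the application's actual schema, and a real case retrieval net would index the case base rather than scan it linearly.

```python
# Minimal sketch of case-based retrieval over tour-package cases.
# Attributes, weights, and cases are invented for illustration.

cases = [
    {"destination": "Mallorca", "board": "half", "price": 420},
    {"destination": "Crete",    "board": "full", "price": 610},
    {"destination": "Mallorca", "board": "full", "price": 550},
]

def similarity(query, case, weights):
    """Weighted sum of per-attribute local similarities."""
    score = 0.0
    for attr, w in weights.items():
        q, c = query[attr], case[attr]
        if attr == "price":
            # Numeric attribute: similarity decays with relative difference.
            score += w * max(0.0, 1.0 - abs(q - c) / max(q, c))
        else:
            # Symbolic attribute: exact match only (a real system would
            # consult a taxonomy of destinations and board types).
            score += w * (1.0 if q == c else 0.0)
    return score

def retrieve(query, cases, weights, k=1):
    """Return the k cases most similar to the query."""
    return sorted(cases, key=lambda c: similarity(query, c, weights),
                  reverse=True)[:k]

query = {"destination": "Mallorca", "board": "full", "price": 500}
weights = {"destination": 0.5, "board": 0.2, "price": 0.3}
best = retrieve(query, cases, weights)[0]
```

A case retrieval net makes the same computation incremental: activation spreads from matched attribute values to cases, avoiding a full scan of the case base.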
This ease has spurred an increasing interest from professionals, the general public, and consequently politicians in making publicly available the tremendous wealth of information kept in museums, archives, and libraries--the so-called memory organizations. Quite naturally, their development has focused on presentation, such as web sites and interfaces to their local databases. Now, with more and more information becoming available, there is an increasing demand for targeted global search, comparative studies, data transfer, and data migration between heterogeneous sources of cultural content. The reality of semantic interoperability, however, remains frustrating: in the cultural area alone, dozens of standard and hundreds of proprietary metadata and data structures exist, along with hundreds of terminology systems.
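The data-migration problem described here is often attacked with a "crosswalk": an explicit mapping from one metadata schema onto another. The field names below are invented for illustration (a museum-style schema mapped onto Dublin Core-like elements); they are not taken from any particular standard.

```python
# Sketch of a metadata crosswalk for migrating records between
# heterogeneous schemas. All field names are illustrative assumptions.

CROSSWALK = {                 # museum schema -> Dublin Core-like schema
    "object_title": "title",
    "maker": "creator",
    "date_of_creation": "date",
}

def migrate(record):
    """Map a source record onto the target schema.

    Fields with no crosswalk entry are silently dropped -- exactly the
    kind of information loss that motivates richer semantic mappings.
    """
    return {CROSSWALK[k]: v for k, v in record.items() if k in CROSSWALK}

record = {"object_title": "Amphora", "maker": "unknown", "inventory_no": "X-17"}
migrated = migrate(record)
```

Field-to-field crosswalks capture only part of the semantics; terminology systems (for values like object types or materials) require separate mappings, which is why interoperability remains hard.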
Inherent batch-to-batch variability, aging, and contamination are major factors contributing to variability in oilfield cement-slurry performance. Of particular concern are problems encountered when a slurry is formulated with one cement sample and used with a batch having different properties. Such variability imposes a heavy burden on performance testing and is often a major factor in operational failure. We describe methods that allow the identification, characterization, and prediction of the variability of oilfield cements. Our approach involves predicting cement compositions, particle-size distributions, and thickening-time curves from the diffuse reflectance infrared Fourier transform spectrum of neat cement powders.
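Predicting a property from a spectrum is, at its simplest, a multivariate calibration problem. The sketch below fits a linear model from spectra to a scalar property by ordinary least squares on synthetic data; it is a stand-in for the paper's method, which works on real DRIFT spectra and would in practice use a technique such as partial least squares to cope with collinear spectral channels.

```python
import numpy as np

# Illustrative calibration sketch: predict a cement property (e.g., a
# thickening-time parameter) from spectra. Data here are synthetic.

rng = np.random.default_rng(0)
n_samples, n_channels = 40, 10
X = rng.normal(size=(n_samples, n_channels))          # "spectra"
true_coef = rng.normal(size=n_channels)
y = X @ true_coef + 0.01 * rng.normal(size=n_samples)  # property values

# Fit the linear calibration model by least squares.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the property of a new, unseen sample from its spectrum.
new_spectrum = rng.normal(size=n_channels)
predicted = float(new_spectrum @ coef)
```

With many, highly correlated wavenumber channels (as in real infrared spectra), plain least squares overfits; that is the usual argument for PLS or principal-component regression in chemometrics.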
We report on the spring 1992 symposium on diagrammatic representations in reasoning and problem solving, sponsored by the American Association for Artificial Intelligence. The emphasis of the symposium was diagrammatic (or pictorial) representations in problem solving and reasoning. It brought together psychologists, computer scientists, and philosophers to discuss a range of issues covering both externally represented diagrams and mental images, and both psychology- and AI-related questions. In this article, we develop a framework for thinking about the issues that were the focus of the symposium and report on the discussions that took place. We anticipate that traditional symbolic representations will increasingly be combined with iconic representations in future AI research and technology and that this symposium is simply the first of many devoted to this topic.
In this article, we develop a framework for comparing ontologies and place a number of the more prominent ontologies into it. We have selected 10 specific projects for this study, including general ontologies, domain-specific ones, and one knowledge representation system. The comparison framework includes general characteristics, such as the purpose of an ontology, its coverage (general or domain specific), its size, and the formalism used. It also includes the design process used in creating an ontology and the methods used to evaluate it. Characteristics that describe the content of an ontology include taxonomic organization, types of concept covered, top-level divisions, internal structure of concepts, representation of part-whole relations, and the presence and nature of additional axioms.
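The comparison framework amounts to a fixed set of dimensions along which each ontology is described. A record type makes this concrete; the field names below follow the dimensions listed in the abstract, while the example values are invented for illustration.

```python
from dataclasses import dataclass

# Illustrative encoding of the comparison framework as a record type.
# Field names mirror the abstract's dimensions; values are made up.

@dataclass
class OntologyProfile:
    name: str
    purpose: str
    coverage: str                  # "general" or "domain-specific"
    size: int                      # e.g., number of concepts
    formalism: str
    design_process: str
    evaluation_method: str
    taxonomic_organization: str
    part_whole_relations: bool
    additional_axioms: bool

example = OntologyProfile(
    name="ExampleOnt",             # hypothetical ontology
    purpose="knowledge sharing",
    coverage="general",
    size=1000,
    formalism="description logic",
    design_process="top-down",
    evaluation_method="competency questions",
    taxonomic_organization="single inheritance",
    part_whole_relations=True,
    additional_axioms=False,
)
```

Placing ten ontologies into the framework then reduces to filling in one such record per project and comparing fields.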
Example from the LOOM WORDNET knowledge base: at the outset, we assumed that the hyponymy relation could simply be mapped onto the subsumption relation and that the synset notion could be mapped onto the notion of concept. Both subsumption and concept have the usual description logic semantics (Woods and Schmolze 1992). Statistics on the LOOM WORDNET knowledge base are reported in table 1. [Figure: WORDNET's noun top.] Under Territorial_Dominion, we find Macao and Palestine together with Trust_Territory. The Trust_Territory synset, defined as "a dependent country, administered by a country under the supervision of United Nations," denotes a general kind of country rather than a specific country such as Macao or Palestine.
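The assumed mapping treats each synset as a concept and each hyponym link as a subconcept link, so subsumption follows the hypernym chain. The sketch below encodes the example's fragment; the `District` parent is an illustrative assumption, and the code also shows why the mapping misfires: Trust_Territory sits as a sibling of Macao even though it denotes a general kind rather than an individual country.

```python
# Sketch: hyponymy mapped onto subsumption. Each synset is a concept;
# hypernym links give the subsumption chain. "District" is assumed.

hypernym_of = {
    "Macao": "Territorial_Dominion",
    "Palestine": "Territorial_Dominion",
    "Trust_Territory": "Territorial_Dominion",
    "Territorial_Dominion": "District",
}

def subsumes(general, specific):
    """True if `general` subsumes `specific` via the hypernym chain."""
    while specific in hypernym_of:
        specific = hypernym_of[specific]
        if specific == general:
            return True
    return False
```

Under this mapping, `subsumes("Trust_Territory", "Macao")` is false: the naive hyponymy-as-subsumption reading fails to capture that a trust territory is a *kind* of territorial dominion while Macao is an *instance* of one.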
The emerging Semantic Web focuses on bringing knowledge representation-like capabilities to Web applications in a Web-friendly way. The ability to put knowledge on the Web, share it, and reuse it through standard Web mechanisms provides new and interesting challenges to artificial intelligence. In this paper, I explore the similarities and differences between the Semantic Web and traditional AI knowledge representation systems, and see if I can validate the analogy "The Semantic Web is to KR as the Web is to hypertext." The first comes from a tutorial on expert systems written by Robert Engelmore with Edward Feigenbaum in 1993. Because of the importance of knowledge in expert systems and because the current knowledge acquisition method is slow and tedious, much of the future of expert systems depends on breaking the knowledge acquisition bottleneck and in codifying and representing a large knowledge infrastructure.
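The Web-friendly way of putting knowledge on the Web is to express it as subject-predicate-object triples (as in RDF) and query them by pattern matching. The toy store below illustrates the idea; the `ex:` identifiers are invented examples, not real URIs, and a real system would use an RDF library and SPARQL.

```python
# Toy triple store: knowledge as subject-predicate-object statements.
# The ex: identifiers are illustrative stand-ins for real URIs.

triples = {
    ("ex:Engelmore", "ex:wrote", "ex:ExpertSystemsTutorial"),
    ("ex:ExpertSystemsTutorial", "ex:topic", "ex:ExpertSystems"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

Because the identifiers are Web-addressable, anyone can publish further triples about `ex:ExpertSystems` and have them merge with these: that sharing-by-reference is the sense in which the Semantic Web is to KR as the Web is to hypertext.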
Logic can be a powerful tool for reasoning about multiagent systems. We focus on two paradigms: logics for cognitive models of agency, and logics used to model the strategic structure of a multiagent system. First of all, logics provide a language in which to specify properties -- properties of an agent, of other agents, and of the environment. Ideally, such a language then also provides a means to implement an agent or a multiagent system, either by somehow executing the specification or by transforming the specification into some computational form. Second, given that such properties are expressed as logical formulas that form part of some inference system, they can be used to deduce other properties.
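The second point, deducing further properties from specified ones, can be made concrete with a minimal forward-chaining sketch over Horn rules. The atoms and rules below are invented for illustration and stand in for the richer modal formulas (beliefs, goals, intentions) that agent logics actually use.

```python
# Minimal propositional forward chaining: from specified properties
# of an agent, deduce further properties. Atoms/rules are illustrative.

rules = [
    # (premises, conclusion): if all premises hold, conclude.
    ({"agent_has_goal", "agent_believes_path"}, "agent_intends_action"),
    ({"agent_intends_action", "action_enabled"}, "agent_acts"),
]
facts = {"agent_has_goal", "agent_believes_path", "action_enabled"}

# Apply rules to a fixed point.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
```

Here the specification (the rules and initial facts) directly yields the derived property `agent_acts`, a tiny instance of using an inference system to deduce properties rather than observing them.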