If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Many large-scale companies use knowledge-based systems (KBS) to support their decision-making processes. The quality of the decisions made depends on the quality of the underlying knowledge. It has been stated many times that verification techniques can be used to improve decision making and the quality of the knowledge rules in a knowledge-based system. Furthermore, verification is seen as one of the key issues in system certification. After a short introduction to the current state of the art in knowledge verification, this paper describes a verification technique used in a commercial development environment for knowledge-intensive applications: VALENS. We describe our experiences with VALENS in some recently finished experiments. Based on these results and an overview of the literature, we discuss the discrepancies between verification in practice and verification in theoretical/scientific settings. This leads to an overview of the requirements for successful verification in practice. Meeting these requirements will increase the return on investment for knowledge-based systems.
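To make the idea of knowledge verification concrete, here is a minimal sketch of the kind of static anomaly check such verification tools perform on a rule base: flagging redundant rules (one rule's conditions subsume another's, with the same conclusion) and directly contradictory rules. The rule representation and example rules are illustrative assumptions, not VALENS's actual interface.

```python
# Minimal sketch of rule-base verification: detect redundant and
# contradictory rule pairs. Representation is illustrative, not VALENS.
# A rule is (conditions: frozenset of facts, conclusion: str).

def find_anomalies(rules):
    """Return (redundant, contradictory) lists of index pairs.
    Redundant: same conclusion and one condition set subsumes the other.
    Contradictory: identical conditions, negated conclusions."""
    redundant, contradictory = [], []
    for i, (conds_a, concl_a) in enumerate(rules):
        for j, (conds_b, concl_b) in enumerate(rules):
            if i >= j:
                continue
            if concl_a == concl_b and (conds_a <= conds_b or conds_b <= conds_a):
                redundant.append((i, j))
            if conds_a == conds_b and (concl_a == "not " + concl_b
                                       or concl_b == "not " + concl_a):
                contradictory.append((i, j))
    return redundant, contradictory

rules = [
    (frozenset({"high_income"}), "grant_loan"),
    (frozenset({"high_income", "no_debt"}), "grant_loan"),      # subsumed by rule 0
    (frozenset({"high_income", "no_debt"}), "not grant_loan"),  # contradicts rule 1
]
red, contra = find_anomalies(rules)
```

Real verifiers check further anomaly classes (circularity, unreachable conclusions, missing rules), but the pairwise pattern above is the core idea.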
The pros and cons of formal methods are the subject of many discussions in Artificial Intelligence (AI). Here, the authors describe a formal method that aims at system refinement based on the results of a test-case validation technique for rule-based systems. This technique provides sufficient information to estimate the validity of each individual rule. Validity in this context is estimated by evaluating the test cases that used the rule under consideration. The objective is to overcome the particular invalidities revealed by the validation process.
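The per-rule validity estimate described above can be sketched very simply: if we log which rules fired in each test case, a rule's validity is the fraction of its cases that passed. This is a hedged illustration of the idea, not the authors' exact estimator.

```python
# Sketch: estimate per-rule validity from test-case outcomes, assuming
# we record which rules fired in each case. Names are illustrative.

def rule_validity(test_cases):
    """test_cases: iterable of (rules_used: set of rule ids, passed: bool).
    Returns {rule: fraction of the test cases using it that passed}."""
    used, passed = {}, {}
    for rules_used, ok in test_cases:
        for r in rules_used:
            used[r] = used.get(r, 0) + 1
            passed[r] = passed.get(r, 0) + int(ok)
    return {r: passed[r] / used[r] for r in used}

cases = [
    ({"r1", "r2"}, True),
    ({"r2"}, False),
    ({"r1", "r3"}, True),
    ({"r2", "r3"}, False),
]
validity = rule_validity(cases)
```

Rules with low validity scores become the candidates for the refinement step the paper targets.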
For rule validation there are as yet no second- or higher-order refinement heuristics. This paper presents an example of second-order refinement heuristics and introduces generic higher-order refinement heuristics. It is proposed that rule refinement should be performed in a case-based manner, i.e., the whole spectrum from first-order to higher-order refinement heuristics can be processed instead of first-order ones only. This approach extends the classical SEEK2 framework. Moreover, the generalization-specialization dichotomy is extended by defining context refinement as a third refinement class. It is shown that the system SEEK2 handles context-refinement problems inadequately and that first-order heuristics are suboptimal.
The strategy for memory-bounded A* search adopted by MA* (Chakrabarti et al. 1989), SMA* (Russell 1992), and SMAG* (Kaindl and Khorsand 1994) is to prune the least-promising nodes from the open list when memory is full, in order to make room for insertion of new nodes. To preserve search information from pruned nodes, heuristic estimates are backed up through the search graph. We show that even when the heuristic function is consistent, backed-up heuristic estimates become inconsistent. As a result, it becomes possible to find a better path to a node that has already been expanded. We describe how to modify a memory-bounded A* graph-search algorithm so that it handles the discovery of a better path to a previously expanded node in a more efficient way. We demonstrate its improved performance on a challenging graph-search problem in computational biology.
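The core difficulty can be reproduced without the memory-bounding machinery: under an inconsistent heuristic, plain A* graph search may close a node and only later discover a cheaper path to it, so a correct implementation must reopen closed nodes. The sketch below (graph, costs, and heuristic values are illustrative assumptions, not the paper's biology benchmark) shows that ignoring the better path yields a suboptimal answer.

```python
import heapq

def astar(graph, h, start, goal, reopen_closed=True):
    """A* graph search; graph maps node -> [(successor, edge_cost), ...].
    With an inconsistent heuristic, a strictly better path to an
    already-expanded (closed) node can turn up later; reopen_closed
    controls whether such a node is moved back to the open list."""
    g = {start: 0}
    open_heap = [(h[start], start)]
    closed = set()
    while open_heap:
        _, n = heapq.heappop(open_heap)
        if n == goal:
            return g[goal]
        if n in closed:
            continue  # stale heap entry
        closed.add(n)
        for succ, cost in graph.get(n, []):
            new_g = g[n] + cost
            if succ in g and new_g >= g[succ]:
                continue
            if succ in closed:
                if not reopen_closed:
                    continue  # ignore the better path: result may be suboptimal
                closed.discard(succ)  # reopen the node
            g[succ] = new_g
            heapq.heappush(open_heap, (new_g + h[succ], succ))
    return None

# h is admissible but inconsistent: h["A"] = 4 > c(A,B) + h["B"] = 1,
# so B is closed via the direct edge S->B before the cheaper path
# through A is discovered.
graph = {"S": [("A", 1), ("B", 3)], "A": [("B", 1)], "B": [("G", 3)]}
h = {"S": 0, "A": 4, "B": 0, "G": 0}
```

With reopening the optimal cost 5 (S-A-B-G) is found; without it the search settles for 6 (S-B-G), which is exactly the failure mode the backed-up estimates of the memory-bounded algorithms can trigger.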
WordNet (Miller 1995) can be viewed as a rich source of world knowledge, structured by lexico-semantic relations among concepts represented as sets of words with the same meaning (synsets). Each synset has a gloss, i.e., a short textual definition with a few examples, attached to it. We aim to transform WordNet glosses into a computational representation that enables reasoning mechanisms. In this paper we address the issue of bracketing coordinated compound nouns. The logic form we use is first-order logic and includes syntactic information in the form of positional arguments.
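As a rough illustration of what "first-order logic with positional arguments" can look like for a compound noun, the toy generator below emits one predicate per noun plus a complex-nominal predicate tying the parts to the whole. The notation (`:NN`, `nn_NNC`) is an assumption for illustration only, not the paper's exact formalism.

```python
def compound_logic_form(nouns):
    """Toy logic form for a bracketed compound noun: each noun becomes a
    predicate over a fresh positional variable, and an nn_NNC predicate
    links a new variable for the whole compound to its parts.
    Notation is illustrative, not the paper's exact formalism."""
    preds = [f"{w}:NN(x{i + 1})" for i, w in enumerate(nouns)]
    args = ", ".join(f"x{i + 1}" for i in range(len(nouns)))
    preds.append(f"nn_NNC(x{len(nouns) + 1}, {args})")
    return " & ".join(preds)

# e.g. one candidate bracketing of "[art and craft] school"
lf = compound_logic_form(["art", "craft", "school"])
```

Choosing which nouns group together before generating the form is precisely the bracketing problem the abstract addresses.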
Primal and dual approaches to non-binary constraint satisfaction problems have recently been compared and evaluated extensively from a theoretical standpoint, based on the amount of pruning each achieves. Enforcing arc consistency on the dual encoding has been shown to strictly dominate enforcing generalized arc consistency (GAC) on the primal encoding (Stergiou & Walsh 1999). More recently, extensions to dual arc consistency have carried these results over to dual encodings based on the construction of compact constraint coverings, which retain the completeness of the encodings while using a fraction of the space. In this paper we present a complete theoretical evaluation of these consistency techniques and demonstrate how arbitrarily high levels of consistency can be achieved efficiently using them.
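For readers unfamiliar with the dual encoding, the following sketch builds it for a small non-binary CSP and enforces arc consistency on it in AC-3 style: each original constraint becomes a dual variable whose domain is the constraint's allowed tuples, and two dual variables are linked whenever their constraints share original variables. The data structures and the tiny example are illustrative assumptions.

```python
# Sketch: dual encoding of a non-binary CSP plus AC-3-style arc
# consistency on it. A constraint is (scope: tuple of vars, allowed
# tuples); dual variable i ranges over constraint i's allowed tuples.

def dual_encode(constraints):
    domains = [set(allowed) for _, allowed in constraints]
    links = []
    for i in range(len(constraints)):
        for j in range(i + 1, len(constraints)):
            shared = [v for v in constraints[i][0] if v in constraints[j][0]]
            if shared:
                links.append((i, j, shared))
    return domains, links

def agrees(scope_i, t_i, scope_j, t_j, shared):
    """Two tuples agree iff they assign the shared variables identically."""
    return all(t_i[scope_i.index(v)] == t_j[scope_j.index(v)] for v in shared)

def dual_ac(constraints):
    domains, links = dual_encode(constraints)
    arcs = [(i, j, s) for i, j, s in links] + [(j, i, s) for i, j, s in links]
    incoming = {}          # arcs (k, i, s) to requeue when domain i shrinks
    for i, j, s in arcs:
        incoming.setdefault(j, []).append((i, j, s))
    queue = list(arcs)
    while queue:
        i, j, shared = queue.pop()
        si, sj = constraints[i][0], constraints[j][0]
        pruned = {t for t in domains[i]
                  if not any(agrees(si, t, sj, u, shared) for u in domains[j])}
        if pruned:
            domains[i] -= pruned
            queue.extend(a for a in incoming.get(i, []) if a[0] != j)
    return domains

# Two ternary-free toy constraints sharing variable y: the tuple (1, 2)
# of the first constraint has no support (no tuple with y = 2) and is pruned.
cs = [(("x", "y"), {(1, 2), (2, 3)}), (("y", "z"), {(3, 1)})]
doms = dual_ac(cs)
```

This agreement-on-shared-variables pruning is what dominates primal GAC in the cited result; the compact-covering encodings keep the same links while storing fewer tuples.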
This paper describes two algorithms for determining the satisfiability of Boolean conjunctive normal form expressions limited to two literals per clause (2-SAT), extending the classic work of Aspvall, Plass, and Tarjan. The first algorithm differs from the original in that satisfiability is determined upon the presentation of each clause rather than of the entire clause set. Experimentally, this online algorithm exhibits average run time linear in the number of variables.
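As background for the online variant, here is a sketch of the classic offline Aspvall-Plass-Tarjan check the paper extends: build the implication graph (clause a ∨ b yields edges ¬a → b and ¬b → a) and declare the formula unsatisfiable iff some variable shares a strongly connected component with its negation. The encoding details are illustrative.

```python
def two_sat(n, clauses):
    """Offline Aspvall-Plass-Tarjan 2-SAT check over variables 1..n.
    A clause is a pair of integer literals; -v denotes the negation of v.
    Satisfiable iff no variable and its negation share an SCC of the
    implication graph (computed here with Kosaraju's algorithm)."""
    N = 2 * n
    idx = lambda l: 2 * (abs(l) - 1) + (l < 0)  # literal -> node index
    neg = lambda i: i ^ 1                       # node of the negated literal
    adj = [[] for _ in range(N)]
    radj = [[] for _ in range(N)]
    for a, b in clauses:                        # (a or b): ~a -> b, ~b -> a
        for u, v in ((neg(idx(a)), idx(b)), (neg(idx(b)), idx(a))):
            adj[u].append(v)
            radj[v].append(u)
    # Pass 1: iterative DFS finish order on the forward graph.
    seen, order = [False] * N, []
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            advanced = False
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(adj[v])))
                    advanced = True
                    break
            if not advanced:
                order.append(u)
                stack.pop()
    # Pass 2: label SCCs on the reverse graph in reverse finish order.
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n))
```

The online algorithm's contribution is to maintain this answer incrementally as each clause arrives, instead of recomputing the SCCs from scratch.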
In this paper we propose a method for predicting learning performance in Support Vector Machines based on a novel definition of intra- and inter-class similarity. Our measure of category similarity can be easily estimated from the learning data. In the second part of the paper we provide experimental evidence supporting the effectiveness of this measure.
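The abstract does not state the paper's exact definition, but the flavor of such a measure can be sketched with mean pairwise cosine similarity within classes versus across classes; intuitively, SVM learning is easier when intra-class similarity is high relative to inter-class similarity. Everything below is a generic illustration under that assumption, not the authors' measure.

```python
from math import sqrt

def cos(u, v):
    """Cosine similarity of two non-zero vectors."""
    num = sum(a * b for a, b in zip(u, v))
    return num / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def class_similarities(data):
    """data: {label: [feature vectors]}, each class with >= 2 samples.
    Returns (intra, inter): mean pairwise cosine similarity within
    classes vs. across classes, estimated directly from learning data."""
    labels = list(data)
    intra, inter = [], []
    for i, la in enumerate(labels):
        xs = data[la]
        intra += [cos(u, v) for k, u in enumerate(xs) for v in xs[k + 1:]]
        for lb in labels[i + 1:]:
            inter += [cos(u, v) for u in xs for v in data[lb]]
    return sum(intra) / len(intra), sum(inter) / len(inter)

# Toy data: two well-separated classes, so intra >> inter is expected.
data = {"a": [(1, 0), (1, 0.1)], "b": [(0, 1), (0.1, 1)]}
intra, inter = class_similarities(data)
```

A gap like this between the two averages is the kind of cheaply computed signal such a method would feed into a performance prediction.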