If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
It has been used to prove properties of safety-critical specifications for systems of nontrivial size, without being adequate for systems of full industrial scale (see, for example, Atkinson and Cunningham [1991]). While the full battery of mechanised deduction methods, with sorting, theory unification, etc., would undoubtedly have provided further enhancement for this system, there seemed to be an underlying need for a "divide and rule" approach that reduces deep problems into small, shallow parts.
The Logic of Epistemic Inconsistency, LEI for short, was conceived to formalize and enable reasoning in the presence of a special type of inconsistency that is very common, unavoidable, and of undisputable relevance to both Artificial Intelligence and the Philosophy of Science. The notion of epistemic inconsistency refers to contradiction occurring in descriptions of (or an agent's beliefs about) a given situation (or object) that rests not on incongruency in the "state of affairs" itself, which would make it an ontological, "real" contradiction, but on the incompleteness (or imperfection) of the (agent's) knowledge about it. An instance of this phenomenon, of particular interest to A.I., occurs in connection with nonmonotonic reasoning. As a matter of fact, philosophers of science have related this kind of inconsistency
*This work was partially sponsored by CNPq.
Anthony J. Bonner* and Michael Kifer
1 Introduction
This paper presents AI applications of the recently proposed Transaction Logic (abbr., TR). Transaction Logic is a novel formalism that accounts in a clean and declarative fashion for the phenomenon of updating first-order knowledge bases, most notably databases and logic programs. Transaction Logic has a natural model theory and a sound-and-complete proof theory. Unlike many other logics, TR allows users to program transactions that modify the state of a knowledge base. This is possible because, like classical logic, TR has a "Horn" version which has both a procedural and a declarative semantics, as well as an efficient SLD-style proof procedure.
A new tableau-based calculus for first-order intuitionistic logic is proposed. The calculus is obtained from the tableau calculus for classical logic by extending its rules with λ-terms. The λ-terms are seen as a compact representation of natural deduction proofs. The benefits of this approach are twofold. First, proof-search methods known from classical logic can be adopted: run-time Skolemization and unification.
The problem here is to eliminate interfering solutions that might be derived from a morphological analysis. We start by using a Markov chain with one long sequence of transitions. In this model the states are the morphological features, and a sequence corresponds to the transition of a word form from one feature to another. After having observed an inadequacy of this model, we explore a nonstationary hidden Markov process. Among the main advantages of this latter model is the possibility of assigning a type to a text given some training samples. Therefore, recognition of a "style", or the creation of a new one, might be developed.
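The first model described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: transition probabilities between morphological features are estimated from one long training sequence, and candidate analyses of an ambiguous form are then scored by the product of their transition probabilities. The feature labels and corpus are invented for the example.

```python
# Sketch: first-order Markov chain over morphological features, used to
# score competing analyses of an ambiguous word form.
from collections import defaultdict

def train_transitions(feature_sequence):
    """Estimate P(next | current) from one long sequence of features."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(feature_sequence, feature_sequence[1:]):
        counts[cur][nxt] += 1
    probs = {}
    for cur, nxts in counts.items():
        total = sum(nxts.values())
        probs[cur] = {n: c / total for n, c in nxts.items()}
    return probs

def score(analysis, probs):
    """Probability of a candidate sequence of morphological features."""
    p = 1.0
    for cur, nxt in zip(analysis, analysis[1:]):
        p *= probs.get(cur, {}).get(nxt, 0.0)
    return p

# Toy training sequence of features (labels are hypothetical).
corpus = ["DET", "NOUN", "VERB", "DET", "NOUN", "VERB", "DET", "ADJ", "NOUN"]
probs = train_transitions(corpus)

# Between two analyses of an ambiguous form, keep the higher-scoring one.
candidates = [["DET", "NOUN", "VERB"], ["DET", "VERB", "VERB"]]
best = max(candidates, key=lambda a: score(a, probs))
# best -> ["DET", "NOUN", "VERB"]: DET->VERB never occurs in training.
```

A nonstationary hidden Markov process, as the abstract goes on to propose, would additionally let the transition probabilities vary with position or text type.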
Recently, there have been a number of efforts to cast natural language understanding in a probabilistic framework. The argument that there is a probabilistic element in natural language interpretation is essentially the following: Most inferences or interpretation decisions cannot be made based on logical criteria alone. Rather, evidence of varying strengths needs to be combined, and probability theory offers a principled basis for doing so. For example, consider the following story1: (1) John got his suitcase. He went to the airport. The interpretation that John is a terrorist intent on blowing up an airplane, say, is not favored by human readers, and is probably not even considered by them, although structurally it is identical to *This work was supported by Nation Science Foundation grant IRI-9123336.
The output of handwritten word recognizers tends to be very noisy due to factors such as variable handwriting styles, distortions in the image data, etc. To compensate for this behaviour, several candidate choices of the word recognizer are initially considered but eventually reduced to a single choice based on constraints posed by the particular domain. In the case of handwritten sentence/phrase recognition, linguistic constraints may be applied to improve the results of the word recognizer. Linguistic constraints can be applied either (i) as a purely post-processing operation or (ii) in a feedback loop to the word recognizer. Both methods are based on syntactic categories (tags) associated with words. The first is a purely statistical method; the second is a hybrid method which combines higher-level syntactic information (hypertags) with statistical information regarding transitions between hypertags. We show the utility of both approaches for handwritten sentence/phrase recognition.
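The post-processing variant can be sketched as a re-ranking step. This is a hedged illustration under assumed details (the tag inventory, transition table, and recognizer scores below are all invented): each position's candidate words are re-scored by combining the recognizer's confidence with the probability of the tag transition from the previously chosen word.

```python
# Sketch: re-ranking a word recognizer's candidate lists with tag-bigram
# transition scores. Greedy left-to-right decoding for simplicity; a real
# system would more likely use Viterbi decoding over the whole sentence.
TRANS = {("DET", "NOUN"): 0.6, ("DET", "VERB"): 0.1,
         ("NOUN", "VERB"): 0.5, ("VERB", "NOUN"): 0.2}

def rerank(candidates, tag_of):
    """candidates: per position, a list of (word, recognizer_score) pairs.
    Weight each recognizer score by the tag-transition probability from
    the previously chosen word, then pick the best candidate."""
    chosen, prev_tag = [], None
    for options in candidates:
        def combined(opt):
            word, s = opt
            t = TRANS.get((prev_tag, tag_of[word]), 0.01) if prev_tag else 1.0
            return s * t
        word, _ = max(options, key=combined)
        chosen.append(word)
        prev_tag = tag_of[word]
    return chosen

tag_of = {"the": "DET", "cat": "NOUN", "cut": "VERB", "ran": "VERB"}
# The recognizer slightly prefers "cut" over "cat" after "the"; the
# DET->NOUN transition overrides that preference.
result = rerank([[("the", 0.9)], [("cat", 0.4), ("cut", 0.5)], [("ran", 0.8)]],
                tag_of)
# result -> ["the", "cat", "ran"]
```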
The representation of documents and queries as vectors in space is a well-known information retrieval paradigm (Salton and McGill, 1983). This paper suggests that the context or topic at a given point in a text can also be represented as a vector. A procedure for computing context vectors is introduced and applied to disambiguating ten English words with success rates between 89% and 95%. The structure of context space is analyzed. The paper argues that vectors, with their potential for gradedness, may be superior for some purposes to other representational schemes. Recently, there has been a lot of interest in measures of semantic relatedness such as mutual information (Church and Hanks, 1989).
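The core idea can be sketched in a few lines. This is an assumed, simplified rendering, not the paper's procedure: the context of an ambiguous occurrence is represented as the sum of vectors for nearby words, and the sense whose centroid is closest by cosine similarity is chosen. The word vectors, sense labels, and three toy dimensions are all invented for illustration.

```python
# Sketch: vector-space word sense disambiguation via context vectors.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy word vectors over 3 co-occurrence dimensions (hypothetical).
WORD_VECS = {"money": [1.0, 0.1, 0.0], "loan": [0.9, 0.2, 0.0],
             "water": [0.0, 0.1, 1.0], "shore": [0.1, 0.0, 0.9]}
SENSE_CENTROIDS = {"bank/finance": [1.0, 0.1, 0.0],
                   "bank/river": [0.0, 0.1, 1.0]}

def disambiguate(context_words):
    """Sum the vectors of the context words, pick the nearest sense."""
    ctx = [sum(WORD_VECS[w][i] for w in context_words) for i in range(3)]
    return max(SENSE_CENTROIDS, key=lambda s: cosine(ctx, SENSE_CENTROIDS[s]))

sense = disambiguate(["money", "loan"])   # -> "bank/finance"
```

The gradedness the abstract mentions shows up naturally here: cosine similarity yields a continuous preference between senses rather than a hard categorical decision.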
Words are represented by the relative frequency distributions of contexts in which they appear, and relative entropy is used to measure the dissimilarity of those distributions. Clusters are represented by "typical" context distributions, averaged from the given words according to their probabilities of cluster membership, and in many cases can be thought of as encoding coarse sense distinctions. Deterministic annealing is used to find lowest-distortion sets of clusters. As the annealing parameter increases, existing clusters become unstable and subdivide, yielding a hierarchical "soft" clustering of the data.
Motivation
Methods for automatically classifying words according to their contexts of use have both scientific and practical interest.
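The dissimilarity measure named above, relative entropy (Kullback-Leibler divergence) between two words' context distributions, can be sketched directly. The smoothing constant and the toy distributions below are assumptions made for the example, not values from the work itself.

```python
# Sketch: relative entropy D(p || q) between relative-frequency
# distributions of the contexts in which two words appear.
import math

def kl(p, q, eps=1e-9):
    """D(p || q) over the union of both context vocabularies.
    A tiny eps keeps the log finite when q assigns zero probability."""
    contexts = set(p) | set(q)
    return sum(p[c] * math.log((p[c] + eps) / (q.get(c, 0.0) + eps))
               for c in contexts if p.get(c, 0.0) > 0)

# Toy distributions of verb contexts for three nouns (hypothetical data).
p_wine = {"drink": 0.7, "pour": 0.3}
p_beer = {"drink": 0.6, "pour": 0.4}
p_idea = {"have": 0.8, "discuss": 0.2}

close = kl(p_wine, p_beer)   # small: similar context distributions
far   = kl(p_wine, p_idea)   # large: nearly disjoint contexts
```

In the clustering described above, a word's distance to a cluster's "typical" context distribution would be measured the same way, with cluster membership probabilities derived from these divergences during annealing.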