If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
EPISTEMOLOGICAL PROBLEMS OF ARTIFICIAL INTELLIGENCE John McCarthy Computer Science Department Stanford University Stanford, California 94305 Introduction In (McCarthy and Hayes 1969), we proposed dividing the artificial intelligence problem into two parts - an epistemological part and a heuristic part. This lecture further explains this division, explains some of the epistemological problems, and presents some new results and approaches. The epistemological part of AI studies what kinds of facts about the world are available to an observer with given opportunities to observe, how these facts can be represented in the memory of a computer, and what rules permit legitimate conclusions to be drawn from these facts. It leaves aside the heuristic problems of how to search spaces of possibilities and how to match patterns. Considering epistemological problems separately has the following advantages: 1. The same problems of what information is available to an observer and what conclusions ...
Circumscription formalizes such conjectural reasoning. McCarthy proposed a program with "common sense" that would represent what it knows (mainly) by sentences in a suitable logical language. It would decide what to do by deducing a conclusion that it should perform a certain act. Performing the act would create a new situation, and it would again decide what to do. This requires representing both knowledge about the particular situation and general common-sense knowledge as sentences of logic.
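The decide-act cycle described above can be sketched as a toy loop. Everything here (the rule format, the names `deduce_act` and `common_sense_loop`, the door example) is a hypothetical illustration of the cycle, not McCarthy's formalism:

```python
# Toy sketch of the decide-act cycle: the program holds knowledge,
# deduces an act it should perform, performs it to obtain a new
# situation, and then decides again.

def deduce_act(knowledge, situation):
    """Toy 'deduction': return the first act whose condition holds."""
    for condition, act in knowledge["rules"]:
        if condition(situation):
            return act
    return None

def perform(act, situation):
    """Performing an act creates a new situation (here, a new dict)."""
    return act(situation)

def common_sense_loop(knowledge, situation, steps=5):
    history = []
    for _ in range(steps):
        act = deduce_act(knowledge, situation)
        if act is None:          # nothing left to conclude
            break
        history.append(act.__name__)
        situation = perform(act, situation)
    return situation, history

# Example: an agent that must open a door before going through it.
def open_door(s):  return {**s, "door_open": True}
def go_through(s): return {**s, "inside": True}

knowledge = {"rules": [
    (lambda s: not s["door_open"], open_door),
    (lambda s: s["door_open"] and not s["inside"], go_through),
]}

final, acts = common_sense_loop(knowledge, {"door_open": False, "inside": False})
# acts is ["open_door", "go_through"]; final["inside"] is True
```

The point of the sketch is the separation the excerpt describes: the particular situation lives in the `situation` value, while the general knowledge lives in the rules.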
Lectured in Philosophy at the Hebrew University in Jerusalem and became Associate Professor in 1957. Since 1957 he has also taught in the Department of History and Philosophy of Science. Joint author with Professor A. A. Fraenkel of "Foundations of Set Theory", to be published by the North-Holland Publishing Company in the series "Studies in Logic". Y. BAR-HILLEL SUMMARY Four sources of inefficiencies in the process of literature searching are briefly described. An "ideal" solution is outlined as a frame of reference and its shortcomings discussed.
R. H. Richens was born in Penge, near London, in 1919. He read natural sciences at Cambridge and is now Assistant Director of the Commonwealth Bureau of Plant Breeding and Genetics at Cambridge. He has been a member of the Cambridge Language Research Group since its foundation. His principal research interests have been the taxonomy and history of the elm, the history of Soviet genetics, and machine translation. Everything symbolized by a set of symbols constitutes the domain of symbolization of the set.
North-Holland Advances in Artificial Intelligence, T. O'Shea (Ed.) Elsevier Science Publishers B.V. (North-Holland) © ECCAI, 1985 THE UBIQUITOUS DIALECTIC Edwina L. Rissland Department of Computer and Information Science University of Massachusetts Amherst, MA 01003 In this paper, we discuss the central role played by examples in reasoning in various fields, including mathematics, law, linguistics, and computer science. In particular, we consider the dialectic between proposing a concept, conjecture, or proposition and testing and refining it with examples. We provide several examples of this ubiquitous process. In mathematics, examples can be said to be as important to understanding as the traditionally exalted definitions, theorems, and proofs (Rissland 1978). In fact, some mathematical areas developed in response to troublesome counterexamples, like modern real function theory, which has been called "the branch of mathematics which deals with counterexamples" (Munroe 1953).
First we examine the vocabulary and the syntax. Then we go through the various objects appearing in this context: pieces, areas, sides, moves, situations, knowledge about chess, people, time, events, judgements, changes, explanations, goals, consequences and reasons. We also consider how the information about an object is put together. Finally we survey the consequences of this analysis for the realization of chess programs. [1] presented a systematic account of the facts of thought and of the methods to express them.
How can we get a computer to understand natural language? Our view of the problem has progressed over the years to a point where an answer to that question today would look quite different from one given ten or even five years ago. Originally, researchers felt that the most relevant issue was syntax. Later, most people agreed that semantics was the most relevant field of study (although few would have agreed on what semantics was). Five years ago, or so, our research was concentrated on finding an adequate meaning representation for sentences.
That is all, except that there follows a list of initial statements (axioms) that involve the words "point", "line" and "plane", and from which other statements involving those undefined words can now be deduced by logic alone. This permits geometry to be taught to a blind man and even to a computer!" Leaving aside the attitude implicit in Kac & Ulam's use of the word "even" in the phrase "even to a computer", it has become clear that programs to prove theorems in first order axiomatic theories such as geometry, working in this "blind" way, are unlikely to be successful. In parentheses, one might remark that mathematicians, however they express their proofs, usually do not construct them by working entirely within the formal syntactic system. Consider, for example: Premises: M is the midpoint of segment BC; BD is the perpendicular from B to AM; CE is the perpendicular from C to AM. To prove: segment BD = segment CE. An appropriate diagram could be that of Figure 1(a).
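The stated conclusion can be checked numerically (a sanity check of the theorem, not the paper's proof method). The coordinates and the helper `dist_point_to_line` below are illustrative choices: since M is the midpoint of BC, the feet of the perpendiculars from B and from C to line AM are equidistant from that line.

```python
# Numeric check: with M the midpoint of BC, the perpendicular
# distances from B and from C to the median line AM are equal,
# i.e. BD = CE in the excerpt's notation.
import math

def dist_point_to_line(p, a, b):
    """Distance from point p to the line through points a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * (px - ax) - (bx - ax) * (py - ay))
    den = math.hypot(bx - ax, by - ay)
    return num / den

A, B, C = (1.0, 5.0), (-3.0, 0.0), (4.0, -2.0)   # arbitrary triangle
M = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)       # midpoint of BC

BD = dist_point_to_line(B, A, M)   # length of perpendicular from B to AM
CE = dist_point_to_line(C, A, M)   # length of perpendicular from C to AM
assert math.isclose(BD, CE)
```

The equality holds for any choice of A, B, C, which is exactly the sort of fact a diagram makes obvious and a purely syntactic proof search does not.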
In this paper we compare three models of transformational grammar: the mathematical model of Ginsburg and Partee (1969) as applied by Salomaa (1971), the mathematical model of Peters and Ritchie (1971 and forthcoming), and the computer model of Friedman et al. (1971). All of these are, of course, based on the work of Chomsky as presented in Aspects of the Theory of Syntax (1965). We were led to this comparison by the observation that the computer model is weaker in three important ways: search depth is not unbounded, structures matching variables cannot be compared, and structures matching variables cannot be moved. All of these are important to the explanatory adequacy of transformational grammar. Both mathematical models allow the first; each allows some form of the second; only one allows the third.
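The second weakness, comparing structures matched by variables, can be illustrated with a toy matcher. The tree representation and the function `match` are hypothetical illustrations, not Friedman's system: a variable binds a whole subtree, and a repeated variable must bind an equal subtree, which is precisely the comparison the computer model lacks.

```python
# Toy tree-pattern matcher. Patterns are nested tuples with a label
# first; strings beginning with '?' are variables that bind subtrees.
# A repeated variable succeeds only if both occurrences bind equal
# subtrees (the "comparison" capability discussed in the text).

def match(pattern, tree, bindings=None):
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:                       # repeated variable:
            return bindings if bindings[pattern] == tree else None
        return {**bindings, pattern: tree}            # first occurrence binds
    if isinstance(pattern, tuple) and isinstance(tree, tuple):
        if pattern[0] != tree[0] or len(pattern) != len(tree):
            return None
        for p, t in zip(pattern[1:], tree[1:]):
            bindings = match(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == tree else None      # literal leaf

# The pattern requires the subject and object NPs to be identical.
pat      = ("S", "?x", ("VP", "saw", "?x"))
tree_ok  = ("S", ("NP", "john"), ("VP", "saw", ("NP", "john")))
tree_bad = ("S", ("NP", "john"), ("VP", "saw", ("NP", "mary")))
# match(pat, tree_ok) succeeds; match(pat, tree_bad) returns None
```

A model without this equality test can find each `?x` separately but cannot state the identity condition between them.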
And-or graphs and theorem-proving graphs determine the same kind of search space and differ only in the direction of search: from axioms to goals, in the case of theorem-proving graphs, and in the opposite direction, from goals to axioms, in the case of and-or graphs. Bidirectional search strategies combine both directions of search. We investigate the construction of a single general algorithm which covers unidirectional search both for and-or graphs and for theorem-proving graphs, bidirectional search for path-finding problems and search for a simplest solution as well as search for any solution. We obtain a general theory of completeness which applies to search spaces with infinite or-branching. In the case of search for any solution, we argue against the application of strategies designed for finding simplest solutions, but argue for assigning a major role in guiding the search to the use of symbol complexity (the number of symbol occurrences in a derivation).
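The AND/OR structure of such a search space can be sketched minimally, under the simplifying assumption of a finite, explicitly given graph searched in one direction, from goals toward axioms. The names `solve`, `axioms`, and `rules` are illustrative, not the paper's notation:

```python
# Unidirectional search over an and-or graph: a goal is solved if it
# is an axiom, or if ANY rule for it (OR-branching) has ALL of its
# subgoals solved (AND-branching).

def solve(goal, axioms, rules, seen=None):
    """rules maps a goal to a list of alternatives (OR); each
    alternative is a list of subgoals that must all hold (AND)."""
    if seen is None:
        seen = set()
    if goal in axioms:
        return True
    if goal in seen:              # guard against cyclic derivations
        return False
    seen = seen | {goal}
    return any(all(solve(g, axioms, rules, seen) for g in alt)
               for alt in rules.get(goal, []))

axioms = {"a", "b"}
rules = {
    "p": [["a", "b"]],            # p if a and b
    "q": [["p"], ["c"]],          # q if p, or q if c
    "r": [["c", "d"]],            # r needs c and d, neither derivable
}
# solve("q", ...) succeeds via p; solve("r", ...) fails
```

A bidirectional strategy of the kind the excerpt describes would additionally grow a frontier forward from the axioms and stop when the two frontiers meet; and a strategy guided by symbol complexity would order the alternatives in each OR-branch by the number of symbol occurrences in the partial derivation rather than taking them in the order given here.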