If you are looking for an answer to the question 'What is Artificial Intelligence?' and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The study of perception is divided among many established sciences: physiology, experimental psychology and machine intelligence, with several others making contributions. But each of the contributing sciences tends to have its own concepts, and its own ways of considering problems. Each -- to use T. S. Kuhn's term (1962) -- has its own 'paradigm', within which its science is respectable. This can make cooperation difficult, as misunderstandings (and even distrust) can be generated by paradigm differences. This paper is a plea to consider perceptual phenomena from many points of view, and to consider whether a general paradigm for perception might be found.
This paper is concerned with two aspects of the 'exact match' problem, which is that of searching amongst a set of stored patterns to find those specified by a given partial description. First, we describe how to design simple content-addressable memories, functioning in parallel, which can do this and which, in some sense, can generalise about the stored data. Secondly, we consider how certain graphical representations of data may be suitable for use in efficient serial search strategies. We indicate how such structures can be used in diagnosis when the availability or cost of tests to be applied cannot be determined in advance. The parallel system to be considered stores descriptions of a set of patterns, and is then used to supplement an incomplete description of a newly presented pattern by matching it against those in store.
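The retrieval task described above can be illustrated with a minimal sketch (this is not the authors' memory design): stored patterns are attribute tuples, a query is a partial description with unspecified fields left as `None`, and every stored pattern is checked against the query. The vocabulary of patterns here is invented for illustration.

```python
def matches(pattern, partial):
    """True if every specified field of the partial description agrees."""
    return all(q is None or p == q for p, q in zip(pattern, partial))

def complete(store, partial):
    """Return all stored patterns consistent with the partial description."""
    return [p for p in store if matches(p, partial)]

# Illustrative stored descriptions (name, class, behaviour).
store = [
    ("sparrow", "bird", "flies"),
    ("penguin", "bird", "swims"),
    ("trout",   "fish", "swims"),
]

# Supplement an incomplete description: which stored patterns are birds?
print(complete(store, (None, "bird", None)))
# Which stored patterns swim?
print(complete(store, (None, None, "swims")))
```

A serial scan stands in for the parallel matching hardware; the point is only the exact-match-against-partial-description behaviour.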
In this paper we compare three models of transformational grammar: the mathematical model of Ginsburg and Partee (1969) as applied by Salomaa (1971), the mathematical model of Peters and Ritchie (1971 and forthcoming), and the computer model of Friedman et al. (1971). All of these are, of course, based on the work of Chomsky as presented in Aspects of the Theory of Syntax (1965). We were led to this comparison by the observation that the computer model is weaker in three important ways: search depth is not unbounded, structures matching variables cannot be compared, and structures matching variables cannot be moved. All of these are important to the explanatory adequacy of transformational grammar. Both mathematical models allow the first; each allows some form of the second; only one of them allows the third.
This paper proposes a set of general conventions for representing natural-language (that is, 'semantic') information in many-sorted first-order predicate calculus. The purpose of this work is to provide a testing-ground for existing theorem-proving programs, and to suggest a method for using theorem-provers for question-answering and other kinds of information retrieval. The details of the notation are left to later sections. Several natural-language constructions are in a certain sense 'higher-order'; a sentence such as 'm is more expensive than n' might be well expressed in this notation. The method, of course, is to re-express what used to be predicates as individuals, and to use a single application predicate.
The problem we consider in this paper is that of discovering formal rules which will enable us to decide when a question posed in English can be answered on the basis of one or more declarative English sentences. To illustrate how this may be done in very simple cases we give rules which translate certain declarative sentences and questions involving the quantifiers 'some', 'every', 'any', and 'no' into a modified first-order predicate calculus, and answer the questions by comparing their translated forms with those of the declaratives. We suggest that in order to capture the meanings of more complex sentences it will be necessary to go beyond the first-order predicate calculus, to a notation in which the scope of words other than quantifiers and negations is clearly indicated. We conclude by describing a notational form for connected sentences, which seems to be a natural extension of Chomsky's 'deep structures'. In this paper we shall consider the problem of when an English sentence, or a series of sentences, provides enough information to answer a question, also posed in English.
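The answering-by-comparison idea can be sketched in miniature (the vocabulary and the single inference rule below are illustrative, not the paper's translation rules): a declarative like 'Fido is a dog' translates to a ground fact, 'Every dog is an animal' to a universally quantified implication, and a question 'Is Fido an animal?' is answered by comparing its translated form with the stored ones.

```python
facts = {("dog", "fido")}        # translation of "Fido is a dog"
rules = {("dog", "animal")}      # translation of "Every dog is an animal"

def answer(pred, individual):
    """Answer 'Is <individual> a <pred>?' from the translated declaratives."""
    if (pred, individual) in facts:
        return "yes"
    # One step of: (every A is B) and A(x) together yield B(x).
    for a, b in rules:
        if b == pred and (a, individual) in facts:
            return "yes"
    return "unknown"

print(answer("animal", "fido"))  # "Is Fido an animal?" -> yes
print(answer("dog", "rex"))      # no declarative mentions Rex -> unknown
```

Answering "unknown" rather than "no" reflects the paper's framing: the question is whether the declaratives provide *enough information*, not whether the negation is provable.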
We have attempted to discover formal rules for transcribing into musical notation the fugue subjects of the Well-Tempered Clavier, as this might be done by an amanuensis listening to a 'deadpan' performance on the keyboard. In this endeavour two kinds of problem arise: what are the harmonic relations between the notes, and what are the metrical units into which they are grouped? The harmonic problem is that the number of keyboard semitones between two notes does not define their harmonic relation, and we further develop an earlier theory of such relations, arriving at an algorithm which assigns every fugue to the right key and correctly notates every accidental in its subject. The performance of a piece of music involves both the performer and the listener in a problem of interpretation. The performer must discern and express musical relationships which are not fully explicit in the musical score, and the listener must appreciate relationships which are not explicit in the performance.