Understanding natural language

Classics

This paper describes a computer system for understanding English. It is based on the belief that in modeling language understanding, we must deal in an integrated way with all of the aspects of language: syntax, semantics, and inference. The system enters into a dialog with a person, responding to English sentences with actions and English replies, and asking for clarification when its heuristic programs cannot understand a sentence through the use of syntactic, semantic, contextual, and physical knowledge. By developing special procedural representations for syntax, semantics, and inference, we gain flexibility and power.
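
To make the idea of a procedural representation concrete, here is a minimal sketch (my own illustration in Python, not Winograd's PROGRAMMAR formalism; the vocabulary and rules are invented): each grammar rule is an ordinary procedure, so a rule can freely embed semantic or contextual checks alongside syntactic ones.

    # Each rule is a procedure that consumes tokens and can embed
    # arbitrary checks; this is the essence of a procedural grammar.

    def parse_np(tokens, i):
        """NP -> DET NOUN. Returns the next index, or None on failure."""
        if (i + 1 < len(tokens) and tokens[i] in {"the", "a"}
                and tokens[i + 1] in {"block", "pyramid", "box"}):
            return i + 2
        return None

    def parse_command(tokens, i=0):
        """COMMAND -> VERB NP. A rule may call other rules or consult context."""
        if i < len(tokens) and tokens[i] in {"grasp", "move"}:
            return parse_np(tokens, i + 1)
        return None

    print(parse_command("grasp the pyramid".split()))  # 3 (all tokens used)
    print(parse_command("grasp pyramid".split()))      # None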


Analysis of curved line drawings using context and global information

Classics

We describe the analysis of visual scenes consisting of black-on-white drawings formed with curved lines and depicting familiar objects and forms: houses, trees, persons, and so on; for instance, drawings found in coloring books. The analysis of these line drawings is an instance of 'the context problem', which can be stated as: given that a set (a scene) is formed by components that locally (by their shape) are ambiguous, because each shape allows a component to have one of several possible values (a circle can be sun, ball, eye, hole) or meanings, can we make use of context information stated in the form of models, in order to single out for each component a value in such a manner that the whole set (scene) is consistent or makes global sense? This paper proposes a way to solve the context problem in the paradigm of coloring-book drawings. By analyzing each component, we arrive at several possible interpretations of that component; further disambiguation is possible only by using global information (information derived from several components, or from the interconnection or interrelation between two or more components), under the assumption that the scene as a whole 'makes global sense' or is 'consistent'.
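
The kind of disambiguation described here can be pictured as a small constraint-satisfaction search. The sketch below is my own toy rendering in Python, not the paper's algorithm; the components, candidate values, and model knowledge are invented. It keeps only those label assignments under which the whole scene makes global sense.

    from itertools import product

    # Each component is locally ambiguous: several candidate values.
    candidates = {
        "circle_1": ["sun", "ball", "eye", "hole"],
        "circle_2": ["sun", "ball", "eye", "hole"],
        "outline":  ["face", "house"],
    }

    def consistent(scene):
        # Toy model knowledge: circles inside a face are eyes; a house
        # scene admits at most one sun.
        if scene["outline"] == "face":
            return scene["circle_1"] == "eye" and scene["circle_2"] == "eye"
        return [scene["circle_1"], scene["circle_2"]].count("sun") <= 1

    names = list(candidates)
    for values in product(*candidates.values()):
        scene = dict(zip(names, values))
        if consistent(scene):
            print(scene)   # one globally consistent interpretation per line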


Automatic translation of languages since 1960: A linguist's view

Classics

Language was considered just a "bunch of words", and the primary task for early machine translation (MT) was to build machines large enough to hold all the words necessary in the translation process. Among the temporary expedients adopted were printing out the several possible solutions of ambiguous text segments, so that the reader could decide the correct meaning for himself, and printing out the ambiguous source-language text itself. In particular, one must understand the rules under which such a complex system as human language operates, and how the mechanism of this operation can be simulated by automatic means, i.e., without any human intervention at all. The second problem, the simulation of human language behavior by automatic means, is almost impossible to achieve, since language is an open and dynamic system in constant change, and because the operation of the system is not yet completely understood.


Natural language question-answering systems: 1969

Classics

In the meantime, Chomsky (1965) devised a paradigm for linguistic analysis that includes syntactic, semantic, and phonological components to account for the generation of natural language statements. This theory can be interpreted to imply that the meaning of a sentence can be represented as a semantically interpreted deep structure. From computer science's preoccupation with formal programming languages and compilers, there emerged another paradigm. The adoption and combination of these two new paradigms have resulted in a vigorous new generation of language-processing systems characterized by sophisticated linguistic and logical processing of well-defined formal data structures. Earlier systems included a social-conversation machine, systems that translated from English into limited logical calculi, and programs that attempted to answer questions from English text.


Studies in the completeness and efficiency of theorem-proving by resolution

Classics

Inference systems T and search strategies E for T are distinguished from proof procedures (T, E). The completeness of procedures is studied by studying separately the completeness of inference systems and of search strategies. Completeness proofs for resolution systems are obtained by the construction of semantic trees. These systems include minimal α-restricted binary resolution, minimal α-restricted M-clash resolution, and maximal pseudo-clash resolution. Certain refinements of hyper-resolution systems with equality axioms are shown to be complete and equivalent to refinements of the paramodulation method for dealing with equality. The completeness and efficiency of search strategies for theorem-proving problems are studied in sufficient generality to include the case of search strategies for path-search problems in graphs. The notion of a theorem-proving problem is defined abstractly so as to be dual to that of an and/or tree. Special attention is given to resolution problems and to search strategies which generate simpler proofs before more complex ones. For efficiency, a proof procedure (T, E) requires an efficient search strategy E as well as an inference system T which admits both simple proofs and relatively few redundant and irrelevant derivations. The theory of efficient proof procedures outlined here is applied to proving the increased efficiency of the usual method for deleting tautologies and subsumed clauses. Counter-examples are exhibited for both the completeness and efficiency of alternative methods for deleting subsumed clauses. The efficiency of resolution procedures is improved by replacing the single operation of resolving a clash by the two operations of generating factors of clauses and of resolving a clash of factors. Several factoring methods are investigated for completeness. Of these, the m-factoring method is shown to be always more efficient than the Wos-Robinson method.
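
As a much-simplified illustration of the deletion rules whose efficiency is analysed here, the following propositional resolution loop (a sketch of the general technique in Python; the thesis itself concerns first-order systems and their refinements) discards tautologies and subsumed clauses as it saturates.

    # Clauses are frozensets of signed atoms (ints: p and -p).

    def resolve(c1, c2):
        """All resolvents of two clauses."""
        return [(c1 - {lit}) | (c2 - {-lit}) for lit in c1 if -lit in c2]

    def is_tautology(c):
        return any(-lit in c for lit in c)

    def subsumed(c, clauses):
        return any(d <= c for d in clauses)

    def refute(clauses):
        """Saturate; True iff the empty clause is derived."""
        clauses = {frozenset(c) for c in clauses}
        while True:
            new = set()
            for a in clauses:
                for b in clauses:
                    for r in map(frozenset, resolve(a, b)):
                        if not r:
                            return True      # empty clause: refutation found
                        if not is_tautology(r) and not subsumed(r, clauses | new):
                            new.add(r)
            if new <= clauses:
                return False                 # saturated without refutation
            clauses |= new

    # {p}, {-p or q}, {-q} is unsatisfiable:
    print(refute([{1}, {-1, 2}, {-2}]))      # True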


REF-ARF: A system for solving problems stated as procedures

Classics

This paper describes an effort to design a heuristic problem-solving program which accepts problems stated in a nondeterministic programming language and applies constraint-satisfaction methods and heuristic search methods to find solutions. The use of nondeterministic programming languages for stating problems is discussed, and REF, the language accepted by the problem solver ARF, is described. Various extensions to REF are considered. The conceptual structure of the program is described in detail, and various possibilities for extending it are discussed.
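
The flavor of stating a problem nondeterministically and leaving the search to the solver can be suggested in a few lines; the following is a hypothetical miniature, not REF's actual syntax nor ARF's constraint-satisfaction and heuristic search methods.

    from itertools import product

    def solve(choices, constraint):
        """choices: name -> candidate values. Enumerate assignments
        (a stand-in for the solver's far more selective search)."""
        names = list(choices)
        for values in product(*(choices[n] for n in names)):
            env = dict(zip(names, values))
            if constraint(env):
                return env
        return None

    # "Choose x and y from 1..5 such that x + y = 6 and x < y."
    print(solve({"x": range(1, 6), "y": range(1, 6)},
                lambda e: e["x"] + e["y"] == 6 and e["x"] < e["y"]))
    # {'x': 1, 'y': 5}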


A note on mechanizing higher order logic

Classics

It seems most unlikely that one could in general write purely applicative 'Schönfinkel descriptions', like (5), of functions already known to one in some other form. One makes assertions in the system by writing clauses, i.e., finite collections of literals considered as disjunctions of their members, universally quantified with respect to all variables. In other words, this is a first-order language in which there is only one relation symbol, namely equality; only one function symbol, namely application; and a collection of individual constants. In particular, the resolution principle may be used as sole principle; or the resolution principle together with paramodulation (Robinson and Wos 1969); or Sibert's system (Sibert 1969); or the E-resolution system of Morris (1969).
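
The encoding is easy to make concrete. In the sketch below (my own notation in Python, not the paper's), terms are built from individual constants and the single binary application symbol, and the defining equations of the K and S combinators are applied as rewrite steps.

    class Const:
        def __init__(self, name): self.name = name
        def __repr__(self): return self.name

    class Apply:
        def __init__(self, f, x): self.f, self.x = f, x
        def __repr__(self): return f"({self.f} {self.x})"

    S, K, x = Const("S"), Const("K"), Const("x")

    def reduce_once(t):
        """One leftmost rewrite with the combinator equations
        ((K a) b) = a  and  (((S a) b) c) = ((a c) (b c))."""
        if isinstance(t, Apply):
            if isinstance(t.f, Apply) and t.f.f is K:
                return t.f.x
            if (isinstance(t.f, Apply) and isinstance(t.f.f, Apply)
                    and t.f.f.f is S):
                a, b, c = t.f.f.x, t.f.x, t.x
                return Apply(Apply(a, c), Apply(b, c))
            return Apply(reduce_once(t.f), reduce_once(t.x))
        return t

    # S K K behaves as the identity: (((S K) K) x) rewrites to x.
    term = Apply(Apply(Apply(S, K), K), x)
    term = reduce_once(term)   # ((K x) (K x))
    term = reduce_once(term)   # x
    print(term)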


Planning and robots

Classics

Another substantial body of work on general problem-solving is that associated with the Graph Traverser program (Doran and Michie 1966, Doran 1967, Michie 1967, Doran 1968, Michie, Fleming and Oldfield 1968, Michie and Ross 1970). In this section and the next we shall consider the transition from heuristic problem-solving as exemplified by the Graph Traverser, to planning by a robot as exemplified by my own work and that of Marsh (Doran 1967, 1967a, 1968a, 1969; Marsh 1970; Michie 1967, 1968a; Popplestone 1967). In order to do this efficiently, the program uses, in general, a heuristic state evaluation function and heuristic operator selection techniques to grow the search tree in the most promising direction. The following types of learning occurred in the system: (a) learning of the relationship between acts and perceptions, by noting the effects of individual acts, by making generalizations about the effects of acts, and by noting that certain complicated transitions from one perceived state to another can always be achieved; and (b) learning which acts to employ in particular situations and the benefits to be expected -- a kind of habit formation.
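
The search regime described is essentially best-first search under a heuristic evaluation function. A generic sketch follows (the Graph Traverser's evaluation functions and operator selection were domain-specific, and partly learned; the toy domain here is invented).

    import heapq

    def best_first(start, goal, successors, h):
        """Repeatedly expand the open state the heuristic ranks best."""
        frontier = [(h(start), start, [start])]
        seen = {start}
        while frontier:
            _, state, path = heapq.heappop(frontier)
            if state == goal:
                return path
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
        return None

    # Toy domain: reach a target integer using +1 and *2 operators.
    goal = 13
    print(best_first(1, goal,
                     successors=lambda n: [n + 1, n * 2],
                     h=lambda n: abs(goal - n)))
    # [1, 2, 4, 8, 9, 10, 11, 12, 13]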


Robotologic

Classics

It is possible to render any theory decidable in a trivial way by invoking a time cutoff on reasoning and having a default mechanism for deciding the values of any expressions still not decided. There does not seem to be any way of avoiding the conclusion that the basic theory must admit an efficient theorem-proving procedure which is close to being a decision procedure. This is what the well-known unification algorithm achieves (Robinson 1965, Prawitz 1960). By Quine's dictum, anyone who advocates the inclusion of set theory in his theory must admit to the view that sets exist; and set theory is widely held to be at the basis of all of mathematics.
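
The unification algorithm referred to can be stated compactly. Below is a skeletal Python version (variables as strings, compound terms as tuples; for brevity it omits the occurs check that Robinson's algorithm includes).

    def walk(t, s):
        """Follow variable bindings in substitution s."""
        while isinstance(t, str) and t in s:
            t = s[t]
        return t

    def unify(a, b, s=None):
        """Return a substitution unifying a and b, or None."""
        s = dict(s or {})
        a, b = walk(a, s), walk(b, s)
        if a == b:
            return s
        if isinstance(a, str):       # unbound variable: bind it
            s[a] = b
            return s
        if isinstance(b, str):
            s[b] = a
            return s
        if (isinstance(a, tuple) and isinstance(b, tuple)
                and len(a) == len(b) and a[0] == b[0]):
            for u, v in zip(a[1:], b[1:]):
                s = unify(u, v, s)
                if s is None:
                    return None
            return s
        return None

    # unify f(X, g(Y)) with f(h(Z), g(Z)):
    print(unify(("f", "X", ("g", "Y")), ("f", ("h", "Z"), ("g", "Z"))))
    # {'X': ('h', 'Z'), 'Y': 'Z'}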


The Syntactic Analysis of English by Machine

Classics

In this paper we describe a program which will assign deep and surface structure analyses to an infinite number of English sentences. The design of this program differs in several respects from that of other automatic parsers presently in existence. Among the most notable of these features is the program's ability to assign syntactic labels to an infinite number of words while operating with a finite dictionary. But undoubtedly the most important decision that resulted from our attempt to construct a model for the perception of syntactic structure was our decision that the program should assign both deep and surface structure analyses to sentences. There is a good deal of evidence to suggest that the efficiency with which human beings recognize the syntactic structure of sentences is to some extent the result of their ability, having heard part of a sentence, to predict the structure of the remainder.
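
One plausible mechanism for labelling words that a finite dictionary does not contain is to fall back on morphological cues such as suffixes; the sketch below illustrates the general idea only, with invented rules, and is not the paper's actual procedure.

    # A finite dictionary plus suffix rules labels unboundedly many words.
    DICTIONARY = {"the": ["DET"], "dog": ["NOUN"], "runs": ["VERB"]}

    SUFFIX_RULES = [
        ("ness", ["NOUN"]),
        ("tion", ["NOUN"]),
        ("ing",  ["VERB", "NOUN", "ADJ"]),
        ("ly",   ["ADV"]),
        ("s",    ["NOUN", "VERB"]),
    ]

    def labels(word):
        if word in DICTIONARY:
            return DICTIONARY[word]
        for suffix, tags in SUFFIX_RULES:
            if word.endswith(suffix):
                return tags
        return ["NOUN", "VERB", "ADJ"]   # open-class default

    print(labels("runs"))       # ['VERB']   (in the dictionary)
    print(labels("blorption"))  # ['NOUN']   (unknown word, suffix rule)
    print(labels("blorply"))    # ['ADV']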