The formulation of the resolution principle by J. A. Robinson (1965a) has provided the impetus for a number of recent efforts in automatic theorem-proving. These programs have generated proofs of some interesting propositions of number theory, in addition to theorems of first-order functional logic and group theory. A 'literal' is an n-place predicate expression or its negation, F(x₁, x₂, …, xₙ) or ¬F(x₁, x₂, …, xₙ), whose arguments are individual variables, individual constants, or functional expressions. Quantifiers do not occur in these formulae, since existentially quantified variables have been replaced by functions of universally quantified ones, and the remaining variables may therefore be taken as universally quantified. For example, the number-theoretic proposition 'For all x and y, if x is a divisor of y then there exists some z such that x times z equals y' may be symbolised as ¬D(x, y) ∨ T(x, f(x, y), y), in which 'D(x, y)' stands for 'x is a divisor of y' and 'T(x, y, z)' stands for 'x times y equals z'.

The two outstanding figures in the history of computer science are Alan Turing and John von Neumann, and they shared the view that logic was the key to understanding and automating computation. In particular, it was Turing who gave us in the mid-1930s the fundamental analysis, and the logical definition, of the concept of 'computability by machine', and who discovered the surprising and beautiful basic fact that there exist universal machines which by suitable programming can be made to do the work of any other machine.

Since completing that essay I have had the benefit of extremely helpful discussions on many of the details with Professor Donald Michie and Professor I. J. Good, both of whom knew Turing well during the war years at Bletchley Park. Professor J. A. N. Lee, whose knowledge of the literature and archives of the history of computing is encyclopedic, also provided additional information, some of which is still unpublished. Further light has very recently been shed on the von Neumann side of the story by Norman Macrae's excellent biography John von Neumann (Macrae 1992). Accordingly, it seemed appropriate to undertake a more complete and thorough version of the FGCS'92 essay, focussing somewhat more on the interesting historical and biographical issues.

Seven years or so after it was first proposed (Kowalski 1974), the technique of 'logic programming' today has an enthusiastic band of users and an increasingly impressive record of applications. For most of these people, logic programming means PROLOG, the system defined and originally implemented by the Marseille group (Roussel 1975).

The Strachey-Landin-Scott doctrine is of course quite compatible with the belief that one can make a machine which, when given a program as input, will systematically work out and deliver a description of its denotation. Landin's classic interpreter -- the SECD machine -- is the very essence of such a device. We can say that the computation of the machine is correct (if it is) only because we have a machine-independent notion of what the program is intended to denote.
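The essence of such a device can be caricatured in a few lines. The following is a toy sketch only, not Landin's original formulation: terms are literal constants, variable names, ("lam", x, body) abstractions, or ("app", f, a) applications, and the four components S (stack), E (environment), C (control), and D (dump) drive the evaluation. The representation and the AP sentinel are assumptions of this sketch.

```python
AP = ("AP",)  # sentinel instruction: apply a closure to an argument

def secd(term):
    s, e, c, d = [], {}, [term], []   # Stack, Environment, Control, Dump
    while c or d:
        if not c:                     # current body finished: return to caller
            result = s.pop()
            s, e, c = d.pop()
            s.append(result)
            continue
        t = c.pop(0)
        if t is AP:                   # argument and closure are on the stack
            arg = s.pop()
            x, body, env = s.pop()
            d.append((s, e, c))       # save the caller's state on the dump
            s, e, c = [], {**env, x: arg}, [body]
        elif isinstance(t, tuple) and t[0] == "lam":
            s.append((t[1], t[2], e)) # build a closure capturing E
        elif isinstance(t, tuple) and t[0] == "app":
            c = [t[1], t[2], AP] + c  # evaluate operator, then operand, then apply
        elif isinstance(t, str):
            s.append(e[t])            # variable lookup
        else:
            s.append(t)               # literal constant
    return s.pop()
```

For example, secd(("app", ("lam", "x", "x"), 42)) evaluates the identity function applied to 42 and yields 42.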

People who write papers on theorem proving have unfortunately tended to ignore the practical computational questions which arise when a program is to be written to do deduction on an actual machine with real money. The present writer is, to his shame, an example of this. There is accordingly a very strong incentive to design the last possible ounce of efficiency into a unification program. The incentive is very much the same as that for seeking maximally efficient realizations of the elementary arithmetic operations in numerical computing -- and the problem is every bit as interesting. Whether the program about to be described is as efficient as possible is doubtful.
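As a point of reference for the operation being optimized, here is a minimal, deliberately unoptimized sketch of first-order unification with the occurs check. The term representation -- variables as strings beginning with '?', constants as other strings, compound terms as tuples -- is an assumption of this sketch, not the representation discussed in the text.

```python
def unify(t1, t2, subst=None):
    """Return a most general unifier of t1 and t2 (as a dict), or None."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return None if occurs(t1, t2, subst) else {**subst, t1: t2}
    if is_var(t2):
        return None if occurs(t2, t1, subst) else {**subst, t2: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        for a, b in zip(t1[1:], t2[1:]):       # unify arguments pairwise
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                                # clash: cannot be unified

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    """Follow variable bindings until an unbound variable or non-variable."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Occurs check: does variable v occur in term t under substitution s?"""
    t = walk(t, s)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, s) for a in t[1:])
    return False
```

For example, unify(("f", "?x", "b"), ("f", "a", "?y")) yields the unifier binding ?x to a and ?y to b, while unify("?x", ("f", "?x")) fails on the occurs check. Making such a routine fast -- avoiding the repeated copying and the recursive occurs check above -- is precisely the engineering problem the paragraph alludes to.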

Any description of a function in the lambda-calculus notation can be translated automatically into a description of the same function in a purely applicative notation which makes no use whatever of abstraction or bound variables. For example, the function √(x(x+1)) might be described as λx(SQRT((TIMES x)((PLUS x)ONE))). It seems most unlikely that one could in general write purely applicative 'Schönfinkel descriptions', like (5), of functions already known to one in some other form. Fortunately there is a general procedure -- the Schönfinkel procedure -- which, when applied to any expression written in the more intuitive lambda-calculus notation, will produce a correct translation of it into the Schönfinkel notation. An excellent summary can be found in Rosenbloom (1950, Chapter 3, section 4).
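The procedure in question is what is now usually called bracket abstraction. A minimal sketch over the combinators S, K and I follows, with curried applications represented as nested 2-tuples; the function names and representation are assumptions of this sketch. It implements the classical clauses [x]x = I, [x]M = (K M) for an atom M other than x, and [x](M N) = ((S [x]M) [x]N).

```python
def abstract(x, term):
    """Compute [x]term: an expression free of x such that applying it to a
    behaves as term with a substituted for x."""
    if term == x:
        return "I"
    if isinstance(term, tuple):                # an application (f a)
        f, a = term
        return (("S", abstract(x, f)), abstract(x, a))
    return ("K", term)                         # constant or other variable

def nf(t):
    """Normalize an S/K/I applicative term by the usual reduction rules:
    (I a) -> a,  ((K a) b) -> a,  (((S f) g) x) -> ((f x) (g x))."""
    while isinstance(t, tuple):
        f, a = nf(t[0]), t[1]
        if f == "I":
            t = a
        elif isinstance(f, tuple) and f[0] == "K":
            t = f[1]
        elif isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == "S":
            g, h = f[0][1], f[1]
            t = ((g, a), (h, a))
        else:
            return (f, nf(a))                  # head is not a redex
    return t
```

For instance, abstract("x", ("f", "x")) yields ((S (K f)) I), and normalizing its application to an argument a recovers (f a), as the lambda term λx.(f x) requires. This naive translation produces correct but wastefully large terms; practical implementations add optimizing clauses such as [x]M = K M whenever x does not occur in M.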

We present an outline of the principal features of an online interactive theorem-proving program, and a brief account of the results of some experiments with it. This program has been used to obtain proofs of new mathematical results recently announced without proof in the Notices of the American Mathematical Society. The construction of the theorem-proving program we are going to discuss here is by no means completed. Rather, the program exists in a state of continual revision and extension, and that part of it which is running and yielding results at the moment is intended to form the nucleus of a much larger and more extensive automatic deduction system in the future. The program is being developed with a number of different questions and applications in mind.

A term is an individual constant or variable or an n-adic function letter followed by n terms. A literal is an atomic formula or the negation thereof. A clause is a set of literals and is thought of as representing the universally-quantified disjunction of its members. It will sometimes be notationally convenient to distinguish between the empty clause □, viewed as a clause, and 'other' empty sets such as the empty set of clauses, even though all these empty sets are the same set-theoretic object ∅. A ground clause (term, literal) is one with no variables.
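These definitions translate directly into concrete data structures. The following sketch makes one possible set of representation choices -- variables as strings beginning with '?', literals as sign/predicate/argument triples, clauses as frozensets -- all of which are assumptions of the sketch rather than anything fixed by the text.

```python
def lit(sign, pred, *args):
    """A literal: an atomic formula (sign=True) or its negation (sign=False)."""
    return (sign, pred, args)

def negate(l):
    """The complementary literal."""
    return (not l[0], l[1], l[2])

# A clause is a set of literals; the empty clause is simply the empty set.
EMPTY_CLAUSE = frozenset()

# A clause such as {~D(x, y), T(x, f(x, y), y)}:
example = frozenset({lit(False, "D", "?x", "?y"),
                     lit(True, "T", "?x", ("f", "?x", "?y"), "?y")})

def is_ground_term(t):
    """A ground term contains no variables."""
    if isinstance(t, tuple):                   # function letter + n terms
        return all(is_ground_term(a) for a in t[1:])
    return not t.startswith("?")               # constant vs. variable

def is_ground_clause(c):
    """A ground clause is one all of whose literals are ground."""
    return all(is_ground_term(t) for l in c for t in l[2])
```

Representing clauses as frozensets makes the set-theoretic reading literal: duplicate literals collapse, the order of disjuncts is immaterial, and the empty clause really is the empty set.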

The generalized resolution principle is a single inference principle which provides, by itself, a complete formulation of the quantifier-free first-order predicate calculus with equality. It is a natural generalization of the various versions and extensions of the resolution principle, each of which it includes as a special case; but in addition it supplies all of the inferential machinery which is needed in order to be able to treat the intended interpretation of the equality symbol as 'built in', and obviates the need to include special axioms of equality in the formulation of every theorem-proving problem which makes use of that notion. The completeness theory of the generalized resolution principle exploits the very intuitive and natural idea of attempting to construct counterexamples to the theorems for which proofs are wanted, and makes this the central concept. It is shown how a proof of a theorem is generated automatically by the breakdown of a sustained attempt to construct a counterexample for it. The kind of proof one gets depends entirely on the way in which the attempt to construct a counterexample is organized, and the theory places virtually no restrictions on how this shall be done.
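At the ground level the breakdown of an attempted counterexample takes a particularly simple form: a set of ground clauses admits no satisfying interpretation precisely when saturation under resolution yields the empty clause. The following toy sketch shows only ordinary ground resolution, not the generalized rule with built-in equality; the representation of propositional literals as sign/atom pairs is an assumption of the sketch.

```python
def resolvents(c1, c2):
    """All ground resolvents of two clauses (frozensets of (sign, atom) literals)."""
    out = []
    for l in c1:
        comp = (not l[0], l[1])                # the complementary literal
        if comp in c2:
            out.append((c1 - {l}) | (c2 - {comp}))
    return out

def refute(clauses):
    """Saturate under ground resolution; True iff the empty clause is derived,
    i.e. iff every attempt to construct a counterexample breaks down."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolvents(c1, c2):
                    if not r:                  # the empty clause: refutation found
                        return True
                    new.add(r)
        if new <= clauses:                     # saturation reached, no proof
            return False
        clauses |= new
```

For example, the set {P ∨ Q, ¬P, ¬Q} is refuted in two resolution steps, while {P, Q} saturates without producing the empty clause. The order in which resolvents are generated here is the crudest possible; as the passage above notes, the theory leaves that organization almost entirely open.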