Logic & Formal Reasoning



And-or graphs, theorem-proving graphs, and bi-directional search

Classics

In B. Meltzer and D. Michie (Eds.), Machine Intelligence 7. New York: Wiley, 167-194. See also: Robert Kowalski. 1975. A Proof Procedure Using Connection Graphs. J. ACM 22, 4 (October 1975), 572-595.


Question-answering in English

Classics

The problem we consider in this paper is that of discovering formal rules which will enable us to decide when a question posed in English can be answered on the basis of one or more declarative English sentences. To illustrate how this may be done in very simple cases we give rules which translate certain declarative sentences and questions involving the quantifiers 'some', 'every', 'any', and 'no' into a modified first-order predicate calculus, and answer the questions by comparing their translated forms with those of the declaratives. We suggest that in order to capture the meanings of more complex sentences it will be necessary to go beyond the first-order predicate calculus, to a notation in which the scope of words other than quantifiers and negations is clearly indicated. Machine Intelligence 6.
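
To make the comparison idea concrete, the following is a minimal Python sketch under an assumed toy grammar in which every sentence has the pattern "<quantifier> <noun> <verb>"; the translation table and entailment checks are illustrative placeholders, not the paper's actual rules.

QUANT = {'every': 'all', 'any': 'all', 'some': 'some', 'no': 'no'}

def translate(sentence):
    # 'every man runs' -> ('all', 'man', 'runs')   (hypothetical toy grammar)
    quant, noun, verb = sentence.lower().rstrip('?.').split()
    return (QUANT[quant], noun, verb)

def answer(question, declaratives):
    # answer a yes/no question by comparing translated forms
    q = translate(question)
    facts = {translate(d) for d in declaratives}
    if q in facts:
        return 'yes'
    # 'every N V' supports 'some N V' (assuming the class N is non-empty)
    if q[0] == 'some' and ('all', q[1], q[2]) in facts:
        return 'yes'
    # 'no N V' denies 'some N V', and conversely
    if q[0] == 'some' and ('no', q[1], q[2]) in facts:
        return 'no'
    if q[0] == 'no' and ('some', q[1], q[2]) in facts:
        return 'no'
    return 'unknown'

print(answer('some man runs?', ['every man runs']))   # yes
print(answer('some dog flies?', ['no dog flies']))    # no

Even this toy version illustrates the division of labour the abstract describes: once declaratives and questions share a logical form, answering reduces to comparing those forms.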


A Paradigm for Reasoning by Analogy

Classics

A paradigm enabling heuristic problem solving programs to exploit an analogy between a current unsolved problem and a similar but previously solved problem to simplify its search for a solution is outlined. It is developed in detail for a first-order resolution logic theorem prover. Descriptions of the paradigm, implemented LISP programs, and preliminary experimental results are presented. This is believed to be the first system that develops analogical information and exploits it so that a problem-solving program can speed its search. IJCAI-71, British Computer Society, London, 1971. Revised version in Artificial Intelligence 2(2):147-178, Fall 1971.
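
As a rough illustration of the analogy idea (not the paper's implemented LISP system), the Python sketch below assumes clauses are represented as frozensets of literal strings and that the analogy is supplied as a symbol map; the clauses of the new problem whose images occur in the old proof are singled out so that a prover could try them first or restrict its search to them.

def translate_clause(clause, symbol_map):
    # rename the symbols in each literal according to the analogy map
    def rename(literal):
        for old, new in symbol_map.items():
            literal = literal.replace(old, new)
        return literal
    return frozenset(rename(lit) for lit in clause)

def analogous_clauses(old_proof_clauses, new_clauses, symbol_map):
    # keep the clauses of the new problem that correspond, under the analogy,
    # to clauses used in the previously found proof
    images = {translate_clause(c, symbol_map) for c in old_proof_clauses}
    return [c for c in new_clauses if c in images]

old_proof = [frozenset({'father(adam,cain)'}),
             frozenset({'-father(X,Y)', 'parent(X,Y)'})]
analogy = {'father': 'mother', 'adam': 'eve'}
new_clauses = [frozenset({'mother(eve,cain)'}),
               frozenset({'-mother(X,Y)', 'parent(X,Y)'}),
               frozenset({'mother(eve,abel)'})]
print(analogous_clauses(old_proof, new_clauses, analogy))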


STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving

Classics

Reprinted in Readings in Planning, edited by J. Allen, J. Hendler, and A. Tate, Morgan Kaufmann Publishers, San Mateo, California, 1990. Also reprinted in Computation and Intelligence: Collected Readings, edited by George F. Luger, AAAI Press, 1995. In IJCAI-71: International Joint Conference on Artificial Intelligence, British Computer Society, London. Revised version in Artificial Intelligence, Volume 2, Issues 3–4, Winter 1971, Pages 189–208.


Automatic Methods of Inductive Inference

Classics

Ph.D. thesis, Edinburgh University. This thesis is concerned with algorithms for generating generalisations from experience. These algorithms are viewed as examples of the general concept of a hypothesis discovery system which, in its turn, is placed in a framework in which it is seen as one component in a multi-stage process which includes stages of hypothesis criticism or justification, data gathering and analysis, and prediction. Formal and informal criteria which should be satisfied by the discovered hypotheses are given. In particular, they should explain experience and be simple. The formal work uses the first-order predicate calculus.

These criteria are applied to the case of hypotheses which are generalisations from experience. A formal definition of generalisation from experience, relative to a body of knowledge, is developed and several syntactical simplicity measures are defined. This work uses many concepts taken from resolution theory (Robinson, 1965). We develop a set of formal criteria that must be satisfied by any hypothesis generated by an algorithm for producing generalisations from experience.

The mathematics of generalisation is developed. In particular, in the case when there is no body of knowledge, it is shown that there is always a least general generalisation of any two clauses, in the generalisation ordering. (In resolution theory, a clause is an abbreviation for a disjunction of literals.) This least general generalisation is effectively obtainable. Some lattices induced by the generalisation ordering, in the case where there is no body of knowledge, are investigated.

The formal set of criteria is investigated. It is shown that for a certain simplicity measure, and under the assumption that there is no body of knowledge, there always exist hypotheses which satisfy them. Generally, however, there is no algorithm which, given the sentences describing experience, will produce as output a hypothesis satisfying the formal criteria. These results persist for a wide range of other simplicity measures. However, several useful cases for which algorithms are available are described, as are some general properties of the set of hypotheses which satisfy the criteria.

Some connections with philosophy are discussed. It is shown that, with sufficiently large experience, in some cases, any hypothesis which satisfies the formal criteria is acceptable in the sense of Hintikka and Hilpinen (1966). The role of simplicity is further discussed. Some practical difficulties which arise because of Goodman's (1965) "grue" paradox of confirmation theory are presented.

A variant of the formal criteria suggested by the work of Meltzer (1970) is discussed. This allows an effective method to be developed when this was not possible before. However, the possibility is countenanced that inconsistent hypotheses might be proposed by the discovery algorithm. The positive results on the existence of hypotheses satisfying the formal criteria are extended to include some simple types of knowledge. It is shown that they cannot be extended much further without changing the underlying simplicity ordering. A program which implements one of the decidable cases is described. It is used to find definitions in the game of noughts and crosses and in family relationships.

An abstract study is made of the progression of hypothesis discovery methods through time. Some possible and some impossible behaviours of such methods are demonstrated. This work is an extension of that of Gold (1967) and Feldman (1970). The results are applied to the case of machines that discover generalisations. They are found to be markedly sensitive to the underlying simplicity ordering employed.
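
The least-general-generalisation result mentioned above is easy to illustrate. The Python sketch below computes the lgg of two atoms by anti-unification, assuming terms are nested tuples of the form ('functor', arg1, ..., argn) and constants are plain strings; the thesis itself works with whole clauses, a generalisation ordering, and a background body of knowledge, none of which is modelled here.

def lgg(t1, t2, pairs=None):
    # least general generalisation of two terms or atoms
    if pairs is None:
        pairs = {}   # maps each pair of differing subterms to one shared variable
    if t1 == t2:
        return t1    # identical subterms generalise to themselves
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # same functor and arity: generalise argument by argument
        return (t1[0],) + tuple(lgg(a, b, pairs) for a, b in zip(t1[1:], t2[1:]))
    # otherwise introduce (or reuse) a variable for this pair of subterms
    if (t1, t2) not in pairs:
        pairs[(t1, t2)] = 'V%d' % len(pairs)
    return pairs[(t1, t2)]

# the lgg of p(a, f(a)) and p(b, f(b)) is p(V0, f(V0))
print(lgg(('p', 'a', ('f', 'a')), ('p', 'b', ('f', 'b'))))

Reusing a single variable for each pair of differing subterms is what makes the result least general; introducing a fresh variable at every occurrence would also generalise both inputs, but more than necessary.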



A Further Note on Inductive Generalization

Classics

In this paper, we develop the algorithm, given in Plotkin (1970), for finding the least generalization of two clauses, into a theory of inductive generalization. The types of hypothesis which can be formed are very simple. They all have the form: (x) Px --> Qx. We have been guided by ideas from the philosophy of science, following Buchanan (1966). There is no search for infallible methods of generating true hypotheses. Instead we define (in terms of first-order predicate calculus) the notions of data and evidence for the data. Next, some formal criteria are set up for a sentence to be a descriptive hypothesis which is a good explanation of the data, given the evidence. We can then look for the best such hypothesis. Machine Intelligence 6.
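
As a concrete, deliberately naive illustration, the Python sketch below enumerates hypotheses of the form (x) Px --> Qx from ground facts, accepting a hypothesis when every observed P-object is also a Q-object; this acceptance test is a stand-in for the paper's formal criteria, not a reconstruction of them.

from itertools import permutations

def hypotheses(facts):
    # facts: a set of (predicate, constant) pairs, e.g. ('raven', 'a')
    predicates = {p for p, _ in facts}
    accepted = []
    for p, q in permutations(sorted(predicates), 2):
        p_objects = {c for pred, c in facts if pred == p}
        q_objects = {c for pred, c in facts if pred == q}
        # keep (x) P(x) -> Q(x) when every observed P-object is also a Q-object
        if p_objects and p_objects <= q_objects:
            accepted.append('(x) %s(x) -> %s(x)' % (p, q))
    return accepted

facts = {('raven', 'a'), ('black', 'a'),
         ('raven', 'b'), ('black', 'b'),
         ('swan', 'c')}
print(hypotheses(facts))   # includes '(x) raven(x) -> black(x)'

Several rival hypotheses can survive such a test (here the converse rule survives as well); choosing the best among them is exactly the role of the formal criteria and the notion of a best descriptive hypothesis that the abstract describes.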