And-or graphs and theorem-proving graphs determine the same kind of search space and differ only in the direction of search: from axioms to goals, in the case of theorem-proving graphs, and in the opposite direction, from goals to axioms, in the case of and-or graphs. We investigate the construction of a single general algorithm which covers unidirectional search both for and-or graphs and for theorem-proving graphs, bidirectional search for path-finding problems, and search for a simplest solution as well as search for any solution. Indeed, many different search spaces can represent the same original problem, as in the case of resolution systems, where different refinements of the resolution rule determine different search spaces for a single problem of demonstrating the unsatisfiability of a given set of clauses. In the tree representation of theorem-proving graphs (used in Machine Intelligence 5 (Kowalski 1969)), identical problems are generated at different times, working forwards from the initial set of axioms {D, E, F, G}.
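The contrast in search direction can be sketched over the initial axiom set {D, E, F, G} mentioned above. The particular rules and the goal A below are hypothetical, chosen only to illustrate the two directions of traversal over one and the same space:

```python
# Hypothetical rule set over the axioms {D, E, F, G}: each rule is a pair
# (premises, conclusion). The same rules support both directions of search.
RULES = [
    ({"D", "E"}, "B"),
    ({"F", "G"}, "C"),
    ({"B", "C"}, "A"),
]
AXIOMS = {"D", "E", "F", "G"}

def forward_chain(axioms, rules, goal):
    """Theorem-proving-graph direction: from axioms toward the goal."""
    known = set(axioms)
    changed = True
    while changed and goal not in known:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return goal in known

def backward_chain(axioms, rules, goal):
    """And-or-graph direction: reduce the goal to conjunctions of subgoals."""
    if goal in axioms:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(axioms, rules, p) for p in premises
        ):
            return True
    return False

print(forward_chain(AXIOMS, RULES, "A"))   # True
print(backward_chain(AXIOMS, RULES, "A"))  # True
```

Both functions succeed or fail on exactly the same rule applications; only the order in which the nodes of the space are encountered differs.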

Although it is convenient for experimental purposes to think of perception in stimulus-response terms, the immense contribution of stored data, required for prediction, makes us see perception as largely cognitive. Although there must be physiological mechanisms to carry out the cognitive processes of generalising and selecting stored data, the concepts we need for understanding what the physiology is carrying out are not part of physiology. This makes parallel processing convenient for biological computing, and serial processing more convenient for man-made computers. On this view, biological perception demonstrates the powers of parallel processing, while computers demonstrate the very different powers of serial processing.

We may regard the subject of artificial intelligence as beginning with Turing's article 'Computing Machinery and Intelligence' (Turing 1950) and with Shannon's (1950) discussion of how a machine might be programmed to play chess. On this view, a machine is intelligent if it solves certain classes of problems requiring intelligence in humans, or survives in an intellectually demanding environment. However, we regard the construction of intelligent machines as fact manipulators as being the best bet both for constructing artificial intelligence and for understanding natural intelligence. Given this notion of intelligence, several kinds of problems arise in constructing the epistemological part of an artificial intelligence.

In brief, we believe that programs for learning large games will need to have at their disposal good rules for learning small games. Each separate box functions as a separate learning machine: it is only brought into play when the corresponding board position arises, and its sole task is to arrive at a good choice of move for that specific position. The demon's task is to make his choices in successive plays in such a way as to maximise his expected number of wins over some specified period. By a development of Laplace's Law of Succession we can determine this probability, which defines the score associated with the node N. To make a move the automaton examines all the legal alternatives and chooses the move leading to the position having the highest associated score, ties being decided by a random choice.
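A single box of the kind described can be sketched as follows. The Laplace estimate (wins + 1) / (plays + 2) is the standard Law-of-Succession form; using it directly as the node score, and the `Box` class itself, are illustrative assumptions rather than the scheme's exact formulation:

```python
import random

class Box:
    """One learning machine, tied to a single board position.

    It keeps win/play tallies per legal move and scores each move by the
    Laplace Law-of-Succession estimate (wins + 1) / (plays + 2).
    """
    def __init__(self, legal_moves):
        self.wins = {m: 0 for m in legal_moves}
        self.plays = {m: 0 for m in legal_moves}

    def score(self, move):
        # Laplace's Law of Succession: estimated probability of a win.
        return (self.wins[move] + 1) / (self.plays[move] + 2)

    def choose(self):
        # Highest-scoring move wins; ties are decided by a random choice.
        best = max(self.score(m) for m in self.wins)
        return random.choice([m for m in self.wins if self.score(m) == best])

    def record(self, move, won):
        self.plays[move] += 1
        if won:
            self.wins[move] += 1

box = Box(["a", "b", "c"])
box.record("a", True)   # score("a") = 2/3
box.record("b", False)  # score("b") = 1/3; untried "c" stays at 1/2
print(box.choose())     # prints a
```

Note how the untried move "c" scores 1/2 rather than 0: the Laplace estimate keeps unexplored moves competitive until evidence accumulates against them.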

These programs have generated proofs of some interesting propositions of number theory, in addition to theorems of first-order functional logic and group theory. Quantifiers do not occur in these formulae, since existentially quantified variables have been replaced by functions of universally quantified ones, and the remaining variables may therefore be taken as universally quantified. Starting with a finite number of individual constants, all possible substitutions were made in the input clauses, and the resulting conjunction of substitution instances was tested for truth-functional consistency. Davis in fact proved that any substitution instance L1 ∨ L2 ∨ … ∨ Ln that contains at least one unmated literal may be erased without affecting consistency, and that the test for consistency may be confined to 'linked conjuncts', i.e., conjuncts (or clauses) wherein each literal Li is negated by a mate. This 'theorem on linked conjuncts' provides a necessary, though not a sufficient, condition of relevance: any substitution instance containing an unmated literal is demonstrably irrelevant to consistency and may be deleted forthwith, but the remaining substitution instances are not necessarily all relevant.
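The deletion rule can be sketched directly on ground clauses. The encoding below is an assumption made for illustration: literals are strings, with a leading "-" marking negation, and a clause is a set of literals. A literal is mated when its complement occurs in some other clause; any clause containing an unmated literal is erased, and the process repeats until only linked conjuncts remain:

```python
def negate(lit):
    """Complement of a ground literal in the string encoding assumed here."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def delete_unlinked(clauses):
    """Repeatedly erase clauses that contain an unmated literal.

    What survives is the set of linked conjuncts, to which the
    truth-functional consistency test may be confined.
    """
    clauses = [frozenset(c) for c in clauses]
    changed = True
    while changed:
        changed = False
        for clause in list(clauses):
            others = [c for c in clauses if c is not clause]
            mated = all(
                any(negate(lit) in c for c in others) for lit in clause
            )
            if not mated:
                clauses.remove(clause)
                changed = True
                break
    return [set(c) for c in clauses]

print(delete_unlinked([{"p", "q"}, {"-p"}, {"r"}]))
# prints []; q, then -p, then r are unmated, so the deletions cascade
```

When every clause is erased, as here, the set is trivially consistent; a genuinely unsatisfiable set, such as {p} and {-p}, survives intact because every literal is mated.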