If you are looking for an answer to the question "What is Artificial Intelligence?" and you have only a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The Inductive Net is made up of a set of horizontal lines (input lines) crossing at right angles a set of vertical lines (output lines), with binary switches placed at the intersections so formed. Each possible feature value has one horizontal line and one vertical line identified with it. A pattern is stored in the Net by exciting the horizontal and vertical lines identified with its feature values, and turning on each switch which receives excitation along both of the lines on which it is placed. Conjunctions of feature values are handled by giving the Inductive Net additional horizontal lines, each of which has a mask placed on the front of it which causes the line to fire when a particular combination of feature values occurs together in the input pattern.
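To make the switch-matrix idea concrete, here is a minimal Python sketch under our own assumptions: the class name, the single square matrix, and the recall rule are illustrative, not taken from the paper.

```python
# Minimal sketch of the Inductive Net's switch matrix (illustrative only;
# names and the recall rule are our assumptions, not the paper's).

class InductiveNet:
    def __init__(self, n_features):
        # switches[i][j] is True once input line i and output line j
        # have been excited together while storing some pattern
        self.switches = [[False] * n_features for _ in range(n_features)]

    def store(self, feature_values):
        # Excite the horizontal and vertical lines for each feature value,
        # and turn on every switch excited along both of its lines.
        for i in feature_values:
            for j in feature_values:
                self.switches[i][j] = True

    def associated(self, feature_value):
        # Output lines whose switch with this input line has been turned on.
        return [j for j, on in enumerate(self.switches[feature_value]) if on]

net = InductiveNet(n_features=6)
net.store({0, 2, 5})      # one stored pattern
net.store({1, 2, 4})      # another
print(net.associated(2))  # -> [0, 1, 2, 4, 5]
```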
This thesis is concerned with algorithms for generating generalisations from experience. These algorithms are viewed as examples of the general concept of a hypothesis discovery system which, in its turn, is placed in a framework in which it is seen as one component in a multi-stage process that includes stages of hypothesis criticism or justification, data gathering and analysis, and prediction. Formal and informal criteria which should be satisfied by the discovered hypotheses are given; in particular, they should explain experience and be simple. The formal work uses the first-order predicate calculus. These criteria are applied to the case of hypotheses which are generalisations from experience. A formal definition of generalisation from experience, relative to a body of knowledge, is developed, and several syntactical simplicity measures are defined. This work uses many concepts taken from resolution theory (Robinson, 1965). We develop a set of formal criteria that must be satisfied by any hypothesis generated by an algorithm for producing generalisations from experience.

The mathematics of generalisation is developed. In particular, in the case where there is no body of knowledge, it is shown that there is always a least general generalisation of any two clauses in the generalisation ordering. (In resolution theory, a clause is an abbreviation for a disjunction of literals.) This least general generalisation is effectively obtainable. Some lattices induced by the generalisation ordering, in the case where there is no body of knowledge, are investigated.

The formal set of criteria is investigated. It is shown that, for a certain simplicity measure and under the assumption that there is no body of knowledge, there always exist hypotheses which satisfy them. Generally, however, there is no algorithm which, given the sentences describing experience, will produce as output a hypothesis satisfying the formal criteria. These results persist for a wide range of other simplicity measures. However, several useful cases for which algorithms are available are described, as are some general properties of the set of hypotheses which satisfy the criteria.

Some connections with philosophy are discussed. It is shown that, with sufficiently large experience, in some cases any hypothesis which satisfies the formal criteria is acceptable in the sense of Hintikka and Hilpinen (1966). The role of simplicity is further discussed, and some practical difficulties which arise because of Goodman's (1965) "grue" paradox of confirmation theory are presented.

A variant of the formal criteria suggested by the work of Meltzer (1970) is discussed. This allows an effective method to be developed where this was not possible before; however, the possibility is countenanced that inconsistent hypotheses might be proposed by the discovery algorithm. The positive results on the existence of hypotheses satisfying the formal criteria are extended to include some simple types of knowledge. It is shown that they cannot be extended much further without changing the underlying simplicity ordering. A program which implements one of the decidable cases is described; it is used to find definitions in the game of noughts and crosses and in family relationships.

An abstract study is made of the progression of hypothesis discovery methods through time. Some possible and some impossible behaviours of such methods are demonstrated. This work is an extension of that of Gold (1967) and Feldman (1970). The results are applied to the case of machines that discover generalisations; they are found to be markedly sensitive to the underlying simplicity ordering employed. (Ph.D. thesis, Edinburgh University.)
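The constructive heart of this abstract, the effectively obtainable least general generalisation, corresponds to what is now called anti-unification. A minimal Python sketch for two atoms of the same predicate follows; the tuple representation of terms and the function names are our assumptions, and Plotkin's construction also covers whole clauses, which this sketch does not.

```python
# Minimal sketch of least general generalisation (anti-unification) of two
# atoms with the same predicate. Terms are nested tuples: ('f', arg1, ...)
# for compound terms, plain strings for constants. The representation and
# names are illustrative assumptions, not Plotkin's own notation.

def lgg(s, t, subst, counter):
    # Same function symbol and arity: generalise argument-wise.
    if isinstance(s, tuple) and isinstance(t, tuple) and \
       s[0] == t[0] and len(s) == len(t):
        return (s[0],) + tuple(lgg(a, b, subst, counter)
                               for a, b in zip(s[1:], t[1:]))
    if s == t:
        return s
    # Differing subterms map to a variable; the same pair of subterms always
    # maps to the same variable, which is what makes the result least general.
    if (s, t) not in subst:
        counter[0] += 1
        subst[(s, t)] = f"X{counter[0]}"
    return subst[(s, t)]

# The lgg of p(f(a), a) and p(f(b), b) is p(f(X1), X1), not p(f(X1), X2):
print(lgg(('p', ('f', 'a'), 'a'),
          ('p', ('f', 'b'), 'b'), {}, [0]))
# -> ('p', ('f', 'X1'), 'X1')
```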
This paper investigates the problem of implementing machine learning of heuristics. First, a method of representing heuristics as production rules is developed which facilitates dynamic manipulation of the heuristics by the program embodying them. Second, procedures are developed which permit a problem-solving program employing heuristics in production rule form to learn to improve its performance by evaluating and modifying existing heuristics and hypothesizing new ones, either during an explicit training process or during normal program operation. Finally, problems which merit further investigation are discussed, including the problem of defining the task environment and the problem of adapting the system to board games.
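As an illustration of heuristics held in production-rule form that the embedding program can evaluate and modify at run time, here is a minimal Python sketch; the rule format, the weights, and the training update are our assumptions, not Waterman's design.

```python
# Minimal sketch: heuristics as production rules (condition -> action) that
# the program can inspect and adjust. Rule format, weights, and the training
# update are illustrative assumptions only.

rules = [
    # [name, condition on the state, action, weight]
    ["raise_if_strong", lambda s: s["hand"] >= 8, "raise", 1.0],
    ["fold_if_weak",    lambda s: s["hand"] <= 3, "fold",  1.0],
]

def act(state):
    # Fire the highest-weighted rule whose condition matches.
    matching = [r for r in rules if r[1](state)]
    return max(matching, key=lambda r: r[3])[2] if matching else "call"

def train(state, correct_action):
    # Strengthen matching rules that advocated the trainer's action,
    # weaken matching rules that advocated something else.
    for r in rules:
        if r[1](state):
            r[3] += 0.1 if r[2] == correct_action else -0.1

print(act({"hand": 9}))      # -> raise
train({"hand": 9}, "raise")  # strengthens raise_if_strong
```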
Continuing his exploration of the organization of complexity and the science of design, this new edition of Herbert Simon's classic work on artificial intelligence adds a chapter that sorts out the current themes and tools (chaos, adaptive systems, genetic algorithms) for analyzing complexity and complex systems. There are updates throughout the book as well. These take into account important advances in cognitive psychology and the science of design while confirming and extending the book's basic thesis: that a physical symbol system has the necessary and sufficient means for intelligent action. The chapter "Economic Reality" has also been revised to reflect a change in emphasis in Simon's thinking about the respective roles of organizations and markets in economic systems.
Grammatical inference is an inductive process of discovering an acceptable grammar for a language on the basis of finite samples from the language. The study has the goals of devising useful inference procedures and of demonstrating a sound formal basis for such procedures. It states the general grammatical inference problem for formal languages, reviews previous work, establishes definitions and notation, and states a position on evaluation measures. It indicates a solution for a particular class of grammatical inference problems, based on an assumed probabilistic structure.
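The probabilistic approach mentioned can be caricatured as Bayesian selection over candidate grammars: prefer the grammar whose prior times likelihood of the sample is largest. Here is a minimal Python sketch under those assumptions; the two candidate grammars, the equal prior, and the sample are ours, purely for illustration.

```python
import math

# Minimal sketch: pick, from a small set of candidate stochastic grammars,
# the one maximising prior * likelihood of the sample. The grammars, prior,
# and sample below are illustrative assumptions.

def lik_anbn(w, p=0.5):
    # S -> a S b (prob p) | empty (prob 1-p); generates a^n b^n.
    n = len(w) // 2
    return (p ** n) * (1 - p) if w == "a" * n + "b" * n else 0.0

def lik_abstar(w, p=0.5):
    # S -> a b S (prob p) | empty (prob 1-p); generates (ab)^n.
    n = len(w) // 2
    return (p ** n) * (1 - p) if w == "ab" * n else 0.0

grammars = {"a^n b^n": lik_anbn, "(ab)^n": lik_abstar}
prior = {"a^n b^n": 0.5, "(ab)^n": 0.5}   # equally simple, equal prior
sample = ["ab", "aabb", "aaabbb"]

def posterior_score(name):
    likelihood = math.prod(grammars[name](w) for w in sample)
    return prior[name] * likelihood

best = max(grammars, key=posterior_score)
print(best, posterior_score(best))   # -> a^n b^n (the (ab)^n grammar scores 0)
```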
In brief, we believe that programs for learning large games will need to have at their disposal good rules for learning small games. Each separate box functions as a separate learning machine: it is only brought into play when the corresponding board position arises, and its sole task is to arrive at a good choice of move for that specific position. The demon's task is to make his choices in successive plays in such a way as to maximise his expected number of wins over some specified period. By a development of Laplace's Law of Succession we can determine the probability of success associated with each choice; this defines the score associated with the node N. To make a move the automaton examines all the legal alternatives and chooses the move leading to the position having the highest associated score, ties being decided by a random choice.
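A minimal Python sketch of the boxes scheme follows. The classical rule-of-succession estimate (wins + 1)/(plays + 2) is assumed here as the score, and the data layout and update rule are ours, for illustration only.

```python
import random

# Minimal sketch of the "boxes" scheme: one tiny learner per board position,
# scoring each legal move by a Laplace-style estimate (wins+1)/(plays+2).
# The data layout and reinforcement rule are illustrative assumptions.

boxes = {}   # position -> {move: [wins, plays]}

def choose(position, legal_moves):
    box = boxes.setdefault(position, {m: [0, 0] for m in legal_moves})
    # Laplace's Law of Succession: estimated success probability per move.
    def score(m):
        wins, plays = box[m]
        return (wins + 1) / (plays + 2)
    best = max(score(m) for m in legal_moves)
    # Ties are decided by a random choice.
    return random.choice([m for m in legal_moves if score(m) == best])

def reinforce(position, move, won):
    record = boxes[position][move]
    record[1] += 1          # one more play of this move
    if won:
        record[0] += 1      # credited with a win

m = choose("X..O.....", ["centre", "corner"])
reinforce("X..O.....", m, won=True)
```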
Attempts to write 'intelligent' computer programs have commonly involved the choice for attack of some particular aspect of intelligent behaviour, together with the choice of some relevant task, or range of tasks, which the program must perform. Toda (1962), in a whimsical and illuminating paper, has discussed the problems facing an automaton in a simple artificial environment. The reader may find it illuminating to imagine himself (the automaton) before a screen on which is displayed a complex pattern which changes from time to time (the sequence of states). Three graphs are distinguished: 1. the subjective environment graph (figure 1); 2. the stored graph, which is that portion of the subjective environment graph which the automaton has stored in its memory as a result of its experience (figure 2(b)); and 3. the option graph, which is that fragment of the stored graph which the automaton 'knows' how to reach (figure 2(c)).
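The distinction between the three graphs can be illustrated with a small sketch: the agent records each transition it experiences (the stored graph), and the option graph is whatever it can reach from a given state using only stored transitions. The toy environment, the names, and the reachability rule below are our assumptions.

```python
# Minimal sketch of the stored graph and option graph. The environment,
# data structures, and names are illustrative assumptions.

environment = {                    # the (subjective) environment graph
    "A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["A"],
}
stored = {}                        # transitions actually experienced

def experience(state, next_state):
    # Record one observed transition in the stored graph.
    stored.setdefault(state, set()).add(next_state)

def option_graph(state):
    # States the automaton 'knows' how to reach via stored transitions.
    reachable, frontier = set(), [state]
    while frontier:
        s = frontier.pop()
        for t in stored.get(s, ()):
            if t not in reachable:
                reachable.add(t)
                frontier.append(t)
    return reachable

experience("A", "B")
experience("B", "D")
print(option_graph("A"))   # -> {'B', 'D'}: C is in the environment, not yet stored
```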
While still unable to outplay checker masters, the program's playing ability has been greatly improved. Limited progress has been made in the development of an improved book-learning technique and in the optimization of playing strategies as applied to the checker playing program described in an earlier paper with this same title. While the investigation of the learning procedures forms the essential core of the experimental work, certain improvements have been made in playing techniques which must first be described. The way in which two limiting values (McCarthy's alpha and beta) are used in pruning can be seen by referring to the move tree of Figure 1, redrawn to illustrate the detailed method used to keep track of the comparison values.
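For reference, here is a minimal Python sketch of alpha-beta pruning in the negamax form over a fixed game tree; the tree and leaf scores are invented for illustration, whereas Samuel's program of course searched checkers positions with a learned evaluation function.

```python
# Minimal sketch of alpha-beta pruning (negamax form) over a hand-built
# game tree. The tree and leaf scores are invented for illustration.

def alphabeta(node, alpha, beta):
    if isinstance(node, int):      # leaf: a score for the player to move
        return node
    best = -float("inf")
    for child in node:
        # Scores flip sign between plies: a good position for the
        # opponent is a bad one for us.
        best = max(best, -alphabeta(child, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:          # remaining siblings cannot matter: prune
            break
    return best

# Two-ply tree: each inner list is a position, integers are leaf scores
# from the viewpoint of the player about to move at that leaf.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, -float("inf"), float("inf")))   # -> 3
```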