
A universal modular actor formalism for artificial intelligence

Classics

This paper proposes a modular ACTOR architecture and definitional method for artificial intelligence that is conceptually based on a single kind of object: actors [or, if you will, virtual processors, activation frames, or streams]. The formalism makes no presuppositions about the representation of primitive data structures and control structures. Such structures can be programmed, micro-coded, or hard-wired in a uniform modular fashion. In fact, it is impossible to determine whether a given object is "really" represented as a list, a vector, a hash table, a function, or a process. The architecture will efficiently run the coming generation of PLANNER-like artificial intelligence languages, including those requiring a high degree of parallelism. The efficiency is gained without loss of programming generality because it only makes certain actors more efficient; it does not change their behavioral characteristics. The architecture is general with respect to control structure and does not have or need goto, interrupt, or semaphore primitives. The formalism achieves the goals that the disallowed constructs are intended to achieve by other, more structured methods. In IJCAI-73: THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 20-23 August 1973, Stanford University, Stanford, California.
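
The paper's central idea, that all computation is carried out by actors interacting only by sending messages, can be sketched in miniature. The toy below is not the paper's formalism: the class and message names are invented for illustration, and ordinary threads and queues stand in for the actor machinery. It only shows how control flow can be expressed entirely through message passing, with no shared state and no goto, interrupt, or semaphore primitives.

```python
# Minimal actor sketch (illustrative only, not the paper's formalism).
# Each actor owns a private mailbox and reacts to one message at a time;
# the only way to affect an actor is to send it a message.
import queue
import threading
import time

class Actor:
    def __init__(self, behavior):
        self._mailbox = queue.Queue()
        self._behavior = behavior                 # how this actor reacts to a message
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self._mailbox.put(message)                # asynchronous, non-blocking send

    def _run(self):
        while True:
            message = self._mailbox.get()
            self._behavior(self, message)         # all control flow is message passing

# Example: a counter actor accumulates increments and reports its total
# to whatever actor is named in the message's "reply_to" field.
def counter_behavior(self, message):
    self.total = getattr(self, "total", 0) + message["increment"]
    message["reply_to"].send({"total": self.total})

def printer_behavior(self, message):
    print("running total:", message["total"])

printer = Actor(printer_behavior)
counter = Actor(counter_behavior)
for i in (1, 2, 3):
    counter.send({"increment": i, "reply_to": printer})
time.sleep(0.1)                                   # let the daemon threads drain their mailboxes
```

Because each actor processes its mailbox sequentially, mutual exclusion falls out of the model itself rather than being imposed with semaphores or interrupts, which is the point the abstract makes about achieving the same goals "by other more structured methods."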


Forecasting and Assessing the Impact of Artificial Intelligence on Society

Classics

At the present stage of research in artificial intelligence, machines are still remote from achieving a level of intelligence comparable in complexity to human thought. As computer applications become more sophisticated, however, and thus more influential in human affairs, it becomes increasingly important to understand both the capabilities and limitations of machine intelligence and its potential impact on society. To this end, the artificial intelligence field was examined in a systematic manner. The study was divided into two parts: (1) delineation of areas of artificial intelligence, and postulation of hypothetical products resulting from progress in the field, and (2) a judgmental portion, which involved applications and implications of the products to society. For the latter purpose, a Delphi study was conducted among experts in the artificial intelligence field to solicit their opinion concerning prototype and commercial dates for the products, and the possibility and desirability of their applications and implications. In IJCAI-73: THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 20-23 August 1973, Stanford University, Stanford, California.
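
The judgmental part of the study rests on the Delphi procedure: experts answer independently, the panel is shown a statistical summary of the answers, and the questionnaire is repeated. The fragment below is only a sketch of that aggregation step; the estimates and field names are invented for illustration and are not data from the study.

```python
# One Delphi aggregation round over invented expert estimates (illustrative only).
import statistics

# Hypothetical panel estimates of the year a given AI product reaches a prototype.
estimates = {"expert_a": 1985, "expert_b": 1990, "expert_c": 1982, "expert_d": 2000}

years = sorted(estimates.values())
median = statistics.median(years)
q1, _, q3 = statistics.quantiles(years, n=4)      # quartiles of the panel's answers

# The median and interquartile range are fed back to the panel; experts whose
# estimates fall outside the range are asked to revise or justify them in the
# next round, which is how the procedure converges toward a group judgment.
print(f"median: {median}, interquartile range: {q1:.0f}-{q3:.0f}")
```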




Automatic Methods of Inductive Inference

Classics

Ph.D. thesis, Edinburgh University. This thesis is concerned with algorithms for generating generalisations from experience. These algorithms are viewed as examples of the general concept of a hypothesis discovery system which, in its turn, is placed in a framework in which it is seen as one component in a multi-stage process which includes stages of hypothesis criticism or justification, data gathering and analysis, and prediction. Formal and informal criteria, which should be satisfied by the discovered hypotheses, are given. In particular, they should explain experience and be simple. The formal work uses the first-order predicate calculus. These criteria are applied to the case of hypotheses which are generalisations from experience. A formal definition of generalisation from experience, relative to a body of knowledge, is developed and several syntactical simplicity measures are defined. This work uses many concepts taken from resolution theory (Robinson, 1965). We develop a set of formal criteria that must be satisfied by any hypothesis generated by an algorithm for producing generalisations from experience.

The mathematics of generalisation is developed. In particular, in the case when there is no body of knowledge, it is shown that there is always a least general generalisation of any two clauses, in the generalisation ordering. (In resolution theory, a clause is an abbreviation for a disjunction of literals.) This least general generalisation is effectively obtainable. Some lattices induced by the generalisation ordering, in the case where there is no body of knowledge, are investigated.

The formal set of criteria is investigated. It is shown that for a certain simplicity measure, and under the assumption that there is no body of knowledge, there always exist hypotheses which satisfy them. Generally, however, there is no algorithm which, given the sentences describing experience, will produce as output a hypothesis satisfying the formal criteria. These results persist for a wide range of other simplicity measures. However, several useful cases for which algorithms are available are described, as are some general properties of the set of hypotheses which satisfy the criteria.

Some connections with philosophy are discussed. It is shown that, with sufficiently large experience, in some cases, any hypothesis which satisfies the formal criteria is acceptable in the sense of Hintikka and Hilpinen (1966). The role of simplicity is further discussed. Some practical difficulties which arise because of Goodman's (1965) "grue" paradox of confirmation theory are presented. A variant of the formal criteria suggested by the work of Meltzer (1970) is discussed. This allows an effective method to be developed when this was not possible before. However, the possibility is countenanced that inconsistent hypotheses might be proposed by the discovery algorithm.

The positive results on the existence of hypotheses satisfying the formal criteria are extended to include some simple types of knowledge. It is shown that they cannot be extended much further without changing the underlying simplicity ordering. A program which implements one of the decidable cases is described. It is used to find definitions in the game of noughts and crosses and in family relationships. An abstract study is made of the progression of hypothesis discovery methods through time. Some possible and some impossible behaviours of such methods are demonstrated. This work is an extension of that of Gold (1967) and Feldman (1970).
The results are applied to the case of machines that discover generalisations. They are found to be markedly sensitive to the underlying simplicity ordering employed.
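
The construction at the heart of the thesis, the least general generalisation of two clauses under the generalisation (subsumption) ordering, is computed by what is now called anti-unification. The sketch below handles only the term-level case and uses an invented tuple encoding of terms, not the thesis's notation; the clause-level algorithm applies the same idea pairwise across compatible literals.

```python
# Least general generalisation (anti-unification) of two first-order terms.
# A term is either a constant (a plain string) or a tuple ("functor", arg1, ...);
# this encoding is an illustrative choice.

def lgg(t1, t2, seen=None, counter=None):
    if seen is None:
        seen, counter = {}, [0]
    if t1 == t2:
        return t1                                 # identical subterms stay as they are
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same functor and arity: generalise argument-wise.
        return (t1[0],) + tuple(lgg(a, b, seen, counter)
                                for a, b in zip(t1[1:], t2[1:]))
    # Different subterms: introduce a variable, reusing it for a repeated pair of
    # subterms so the result is the *least* general generalisation, not just any one.
    if (t1, t2) not in seen:
        seen[(t1, t2)] = f"X{counter[0]}"
        counter[0] += 1
    return seen[(t1, t2)]

# lgg(p(a, f(a)), p(b, f(b))) = p(X0, f(X0)): both arguments vary together,
# so a single variable suffices.
print(lgg(("p", "a", ("f", "a")), ("p", "b", ("f", "b"))))
```

The reuse of one variable per pair of mismatched subterms is what makes the result least general in the ordering; introducing fresh variables everywhere would still be a generalisation, just not the least one.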




7 must watch documentaries on Statistics and Machine Learning

#artificialintelligence

Over the past few years, there has been a growing interest in statistics and machine learning. Today, machine learning can help us make smarter decisions, and big data controls everything in our lives. It influences how we work, shop and do business. Data even help the police determine when and where the next crime is likely to happen. But how is it all happening and how did it all start?