Results


Generalization as Search

Classics (Collection 2)

We learn (memorize) multiplication tables, learn (discover how) to walk, learn (build up an understanding of, then an ability to synthesize) languages. Many subtasks and capabilities are involved in these various kinds of learning. One capability central to many kinds of learning is the ability to generalize: to take into account a large number of specific observations, then to extract and retain the important common features that characterize classes of these observations. This generalization problem has received considerable attention for two decades in the fields of Artificial Intelligence, Psychology, and Pattern Recognition (e.g., [Bruner, …]). The results so far have been tantalizing: partially successful generalization programs have been written for problems ranging from learning fragments of spoken English to learning rules of chemical spectroscopy. But comparing alternative strategies, and developing a general understanding of techniques, has been difficult because of differences in data representations, terminology, and problem characteristics.
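The feature-extraction step the abstract describes can be made concrete with a toy program. This is a minimal sketch of one simple generalization strategy (keep features shared by all positive examples, replace differing ones with a "don't care" marker); the attribute vectors and the "?" representation are illustrative assumptions, not the paper's own framework.

```python
# Toy generalization: retain the features common to all positive examples
# and generalize away (mark "?") any feature on which they disagree.
# The example attributes ("round", "red", ...) are invented for illustration.

def generalize(observations):
    """Return the most specific description covering every observation."""
    hypothesis = list(observations[0])
    for obs in observations[1:]:
        for i, value in enumerate(obs):
            if hypothesis[i] != value:
                hypothesis[i] = "?"   # differing feature: generalize it away
    return hypothesis

positives = [
    ("round", "red", "small"),
    ("round", "red", "large"),
    ("round", "green", "large"),
]
print(generalize(positives))  # ['round', '?', '?']
```

The retained hypothesis characterizes the whole class of observations by their shared feature, exactly the "extract and retain the important common features" capability the abstract singles out.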


Opening Address


R. L. GREGORY, Psychological Laboratory, Cambridge
Discussion on paper 5 (683)
6 Some questions concerning the explanation of learning in animals (691), MR. A. J. WATSON, Psychological Laboratory, Cambridge
Discussion on paper 6 (721)
7 Information, redundancy and decay of the memory trace (729), DR. J. MERRIMAN, HM Treasury, London
5 Automatic control by visual signals, DR. W. K. TAYLOR, University College, London
Discussion on paper 5
6 An analysis of non-mathematical data-processing, MR.


SESSION 3 PAPER 5 LEARNING MACHINES


Recent activities have swung away from biology, but this will be remedied. The application of learning machines to process control is discussed. Three approaches to the design of learning machines are shown to have more in common than is immediately apparent: (1) one based on the use of conditional probabilities, (2) one suggested by the idea that biological learning is due to facilitation of synapses, and (3) one based on existing statistical theory dealing with the optimisation of operating conditions. Although the application of logical-type machines to process control involves formidable complexity, design principles are evolved here for a learning machine which deals with quantitative signals and depends for its operation on the computation of correlation coefficients.
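The correlation-coefficient computation the summary mentions can be illustrated with a short sketch. This is a generic Pearson correlation between an input signal and a quantity to be controlled, not the paper's specific machine design; the process signals below are invented.

```python
# Pearson correlation coefficient between two quantitative signals.
# A machine of the kind described could weight each input by such a
# coefficient; the "temperature" / "output" / "noise" signals are invented.
import math

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

temperature = [1.0, 2.0, 3.0, 4.0, 5.0]    # hypothetical process input
output      = [2.1, 3.9, 6.0, 8.1, 9.9]    # quantity to be controlled
noise       = [0.3, -1.2, 0.8, -0.5, 0.1]  # an irrelevant input
print(correlation(temperature, output))     # close to 1.0
print(correlation(temperature, noise))      # near 0
```

An input strongly correlated with the desired behaviour earns a large weight; an irrelevant one does not, which is the statistical sense in which such a machine "learns" its operating conditions.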


SESSION 3 PAPER 6 PANDEMONIUM: A PARADIGM FOR LEARNING


O. G. Selfridge was born in London, 10 May. PANDEMONIUM: A PARADIGM FOR LEARNING. O. G. SELFRIDGE. INTRODUCTION. We are proposing here a model of a process which we claim can adaptively improve itself to handle certain pattern recognition problems which cannot be adequately specified in advance. Such problems are usual when trying to build a machine to imitate any one of a very large class of human data-processing techniques. A speech typewriter is a good example of something that very many people have been trying unsuccessfully to build for some time. We do not suggest that we have proposed a model which can learn to typewrite from merely hearing speech. Pandemonium does not, however, seem on paper to have the same kinds of inherent restrictions or inflexibility that many previous proposals have had.
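The pandemonium idea behind this introduction can be sketched as a toy program: independent "feature demons" each shout a score for the input, "cognitive demons" (one per category) weigh those shouts, and a "decision demon" picks the loudest. The features, categories, and weights below are invented for illustration, and the adaptive weight-adjustment the paper develops is omitted.

```python
# Toy pandemonium: cognitive demons weigh feature shouts; the decision
# demon selects the loudest category. Features and weights are invented.

def recognize(input_features, cognitive_demons):
    shouts = {}
    for category, weights in cognitive_demons.items():
        # each cognitive demon's loudness is a weighted sum of feature shouts
        shouts[category] = sum(weights.get(f, 0.0) * v
                               for f, v in input_features.items())
    return max(shouts, key=shouts.get)   # decision demon: loudest wins

demons = {
    "A": {"apex": 1.0, "crossbar": 1.0, "curve": -0.5},
    "O": {"curve": 1.5, "closed_loop": 1.0, "apex": -0.5},
}
print(recognize({"apex": 1.0, "crossbar": 0.8}, demons))  # A
```

Learning, in the paper's sense, would consist of adjusting the weights so that the right demon tends to shout loudest, without the designer specifying the recognition rules in advance.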


SESSION 3 PAPER 4 TWO THEOREMS OF STATISTICAL SEPARABILITY IN THE PERCEPTRON


Frank Rosenblatt, born in New Rochelle, New York, U.S.A., July 11, 1928, graduated from Cornell University in 1950, and received a Ph.D. degree in psychology from the same university in 1956. He was engaged in research on schizophrenia as a Fellow of the U.S. Public Health Service, 1951-1953. He has made contributions to techniques of multivariate analysis, psychopathology, information processing and control systems, and physiological brain models. He is currently a Research Psychologist at the Cornell Aeronautical Laboratory, Inc., in Buffalo, New York, where he is Project Engineer responsible for Project PARA (Perceiving and Recognizing Automaton). FRANK ROSENBLATT. SUMMARY. A theoretical brain model, the perceptron, has been developed at the Cornell Aeronautical Laboratory, in Buffalo, New York.
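The perceptron summarized here is a network of threshold units with adjustable connections. A minimal error-correction sketch in modern notation can convey the core mechanism; this is an assumption-laden simplification, not Rosenblatt's exact formulation or his separability theorems, and the toy data are invented.

```python
# Minimal perceptron-style error correction: nudge the weights toward
# examples the threshold unit misclassifies. Data and rates are invented.

def train(samples, epochs=20, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:          # target is +1 or -1
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else -1
            if pred != target:             # error-correction step
                w[0] += lr * target * x[0]
                w[1] += lr * target * x[1]
                b += lr * target
    return w, b

# linearly separable toy data
data = [((0.0, 0.0), -1), ((1.0, 1.0), 1), ((0.2, 0.1), -1), ((0.9, 0.8), 1)]
w, b = train(data)
print(all((1 if w[0]*x[0] + w[1]*x[1] + b > 0 else -1) == t
          for (x, t) in data))  # True
```

The separability theorems of the paper's title concern exactly this setting: when the two classes can be separated, a procedure of this general kind can find weights that separate them.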


SESSION 1 PAPER CONDITIONAL PROBABILITY COMPUTING IN A NERVOUS SYSTEM


Dr. Uttley took an Honours degree in Mathematics at King's College, London, where he also took a degree in Psychology and did postgraduate research in Visual Perception. At the Royal Radar Establishment he designed and built analogue and digital computers. For the last five years Dr. Uttley has been working on theories of computing in the nervous system. The suggestion is based on the similarity of behaviour of these formal systems and of animals. The design of classification computers is discussed in the first paper; the design of conditional probability computers is discussed in a third paper (Uttley, 1958, ref. 15); in both papers working models are described.
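A conditional probability computer of the kind discussed estimates quantities such as P(B given A) by counting co-occurrences of events. The unit below is a minimal sketch under that assumption; the event streams and the class name are invented, not taken from the working models the papers describe.

```python
# Toy conditional-probability unit: count how often event B accompanies
# event A and estimate P(B|A) from the counts. Event streams are invented.

class ConditionalUnit:
    def __init__(self):
        self.count_a = 0    # times A fired
        self.count_ab = 0   # times A and B fired together

    def observe(self, a_fired, b_fired):
        if a_fired:
            self.count_a += 1
            if b_fired:
                self.count_ab += 1

    def probability(self):
        # estimated P(B | A); undefined until A has been seen
        return self.count_ab / self.count_a if self.count_a else None

unit = ConditionalUnit()
for a, b in [(1, 1), (1, 0), (1, 1), (0, 1), (1, 1)]:
    unit.observe(a, b)
print(unit.probability())  # 0.75
```

The similarity to animal behaviour that the text points to is that such a counting unit, like a conditioned animal, comes to expect B when A occurs in proportion to how reliably the pairing has held.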


SESSION 1 PAPER 2 OPERATIONAL ASPECTS OF INTELLECT


Dr. MacKay is a lecturer in Physics. After graduating from St. Andrew's University in 1943 he spent three years on radar work with the Admiralty. Since 1946, when he joined the staff of King's College, he has been active in the development of information theory, with special interest in its bearing on the study of both natural and artificial information-systems. In 1951 a Rockefeller Fellowship enabled him to spend a year working in this field in the U.S.A. His experimental work was concerned at first mainly with high-speed analogue computation, and latterly with the informational organization of the nervous system. D. M. MACKAY. SUMMARY. This paper is concerned with some theoretical problems of securing and evaluating 'intelligence' in artificial organisms, particularly the kind of operational features that distinguish what we call 'intellect' from mere ability to calculate. Among those discussed are (a) the ability to take cognizance of the 'weight' as well as the structure of information.


SESSION 1 PAPER 1 SOME METHODS OF ARTIFICIAL INTELLIGENCE AND HEURISTIC PROGRAMMING


Marvin Lee Minsky was born in New York on 9th August, 1927. He received his B.A. from Harvard in 1950 and Ph.D. in Mathematics from Princeton in 1954. For the next three years he was a member of the Harvard University Society of Fellows, and in 1957-58 was a staff member of the M.I.T. Lincoln Laboratories. At present he is Assistant Professor of Mathematics at M.I.T., where he is giving a course in Automata and Artificial Intelligence and is also a staff member of the Research Laboratory of Electronics. Particular attention is given to processes involving pattern recognition, learning, planning ahead, and the use of analogies or 'models'.


Mechanisation of Thought Processes


LONDON: HER MAJESTY'S STATIONERY OFFICE, 1959. Has worked on various aspects of vision, including eye-movements; spatial properties of receptive fields in the frog's retina; changes in temporal and spatial summation with level of adaptation; and thresholds after a delay, which may vary from about 100 msec. The argument is put forward that the storage and utilization of this enormous sensory inflow would be made easier if the redundancy of the incoming messages was reduced. Some physiological mechanisms which would start to do this are already known, but these appear to have arisen by evolutionary adaptation of the organism to types of redundancy which are always present in the environment of the species.
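The redundancy-reduction argument can be made concrete with a toy recoding. Transmitting successive differences of a slowly varying signal is one simple illustration of reducing redundancy, chosen here purely for illustration and not as a claim about the physiological mechanisms the text mentions.

```python
# A slowly varying sensory signal is highly redundant; recoding it as
# successive differences concentrates the message into fewer distinct
# symbols. The sample signal is invented for illustration.
signal = [10, 10, 11, 11, 11, 12, 12, 13, 13, 13]
deltas = [signal[0]] + [b - a for a, b in zip(signal, signal[1:])]
print(sorted(set(signal)))      # [10, 11, 12, 13]  (four symbols)
print(sorted(set(deltas[1:])))  # [0, 1]  (two symbols: redundancy reduced)
```

A code with fewer, more evenly used symbols needs less storage per message, which is the sense in which reducing redundancy would ease the handling of the sensory inflow.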


Mechanisation of Thought Processes


LONDON: HER MAJESTY'S STATIONERY OFFICE, 1959. PREFACE. This Symposium was the tenth in the present series of N.P.L. Symposia. Two are normally held each year: one on a subject of general industrial interest and the other on a theme of more academic research. This Symposium was held to bring together scientists studying artificial thinking, character and pattern recognition, learning, mechanical language translation, biology, automatic programming, industrial planning and clerical mechanization. It was felt that a common theme in all these fields was "The Mechanization of Thought Processes" and that an interchange of ideas between these specialists would be very valuable. It is unfortunate that meeting accommodation in the Laboratory is at present very restricted, and a very large number of people had to be turned away.