If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Recent activities have swung away from biology, but this will be remedied. The application of learning machines to process control is discussed. Three approaches to the design of learning machines are shown to have more in common than is immediately apparent. These are (1) based on the use of conditional probabilities, (2) suggested by the idea that biological learning is due to facilitation of synapses, and (3) based on existing statistical theory dealing with the optimisation of operating conditions. Although the application of logical-type machines to process control involves formidable complexity, design principles are evolved here for a learning machine which deals with quantitative signals and depends for its operation on the computation of correlation coefficients.
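The abstract does not specify the machine's internal design, but the quantity it computes is a standard one. As a minimal illustration (the data and function names here are invented for the example), a Pearson correlation coefficient between a process setting and a measured output can be computed as:

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A learning controller might favour the operating setting whose
# history correlates most strongly with output quality.
settings = [1.0, 2.0, 3.0, 4.0, 5.0]
quality = [2.1, 3.9, 6.2, 7.8, 10.1]
print(correlation(settings, quality))
```

A coefficient near +1 or -1 marks a setting worth adjusting; one near 0 marks a setting the machine can ignore.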
Dr. Uttley took an Honours degree in Mathematics at King's College, London, where he also took a degree in Psychology and did postgraduate research in Visual Perception. At the Royal Radar Establishment he designed and built analogue and digital computers. For the last five years Dr. Uttley has been working on theories of computing in the nervous system. The suggestion is based on the similarity of behaviour of these formal systems and of animals. The design of classification computers is discussed in the first paper; the design of conditional probability computers is discussed in a third paper (Uttley, 1958, ref. 15); in both papers working models are described.
Dr. MacKay is a lecturer in Physics. After graduating from St. Andrew's University in 1943 he spent three years on radar work with the Admiralty. Since 1946, when he joined the staff of King's College, he has been active in the development of information theory, with special interest in its bearing on the study of both natural and artificial information-systems. In 1951 a Rockefeller Fellowship enabled him to spend a year working in this field in the U.S.A. His experimental work has been concerned at first mainly with high-speed analogue computation, and latterly with the informational organization of the nervous system.

D. M. MACKAY

SUMMARY

This paper is concerned with some theoretical problems of securing and evaluating 'intelligence' in artificial organisms, particularly the kind of operational features that distinguish what we call 'intellect' from mere ability to calculate. Among those discussed are (a) the ability to take cognizance of the 'weight' as well as the structure of information.
The conference reported in this book is one of many attempts being made throughout the country to provide such a basis of communication. Publication of these proceedings is another step toward the same objective. The Harvard Business School Conference on Automatic Data Processing was held on September 8 and 9, 1955, concurrently with a conference on Automation. These conferences were held as a service to The Associates of the Harvard Business School, a group of companies and individuals who, among other things, are interested in and support the research activities of the School. Invitations were extended to individual Associates and to controllers, financial executives, and top management of member firms, and the program was designed entirely to meet the interests of these persons.
A program called "AM" is described which carries on simple mathematics research, defining and studying new concepts under the guidance of a large body of heuristic rules. The 250 heuristics communicate via an agenda mechanism, a global priority queue of small tasks for the program to perform, together with reasons why each task is plausible (for example, "Find generalizations of 'primes', because 'primes' turned out to be so useful a concept"). Each concept is represented as an active, structured knowledge module. One hundred very incomplete modules are initially supplied, each one corresponding to an elementary set-theoretic concept (for example, union). This provides a definite but immense space which AM begins to explore.
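The agenda mechanism described above, a global priority queue of tasks where each task carries the reasons supporting it, can be sketched as follows. This is an illustrative reconstruction, not AM's actual Lisp internals; the class and field names are invented for the example:

```python
import heapq

class Agenda:
    """Global priority queue of tasks; each task carries its supporting reasons."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal-priority tasks pop in FIFO order

    def add_task(self, priority, description, reasons):
        # heapq is a min-heap, so negate the priority to pop the best task first.
        heapq.heappush(self._heap, (-priority, self._counter, description, reasons))
        self._counter += 1

    def next_task(self):
        _, _, description, reasons = heapq.heappop(self._heap)
        return description, reasons

agenda = Agenda()
agenda.add_task(300, "Find generalizations of 'primes'",
                ["'primes' turned out to be so useful a concept"])
agenda.add_task(100, "Fill in examples of 'union'",
                ["few examples of 'union' are currently known"])
task, reasons = agenda.next_task()
print(task)  # the higher-priority task comes out first
```

Keeping the reasons attached to each task is what lets heuristics raise a task's priority when a new reason for it appears, rather than blindly duplicating work.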
Frogs and toads provide interesting parallels to the way in which humans can see the world about them, and use what they see in determining their actions. What they lack in subtlety of visually-guided behaviour, they make up for in the amenability of their behaviour and the underlying neural circuitry to experimental analysis. This paper presents three specific models of neural circuitry underlying visually-guided behaviour in frog and toad. They form an 'evolutionary sequence' in that each model incorporates its predecessor as a subsystem in such a way as to explain a wider range of behavioural data in a manner consistent with current neurophysiology and anatomy. Lettvin, Maturana, McCulloch & Pitts (1959) initiated the behaviourally-oriented study of the frog visual system with their classification of retinal ganglion cells into four classes, each projecting to a retinotopic map at a different depth in the optic tectum, the four maps in register.
One surprise is the extent to which the planning process seems to be event driven. For this experiment, the planning process would not be well characterized as the search of a large space for the solution to a fixed experiment. Rather, most planning in these experiments seems to be short term and in response to unexpected results in the laboratory. Considerable knowledge is used in forming new hypotheses in response to the unexpected. Furthermore, much of the geneticist's behavior seems to be directed toward exploiting serendipity.
Our understanding of any process can be measured by the extent to which a simulation we create mimics the real behavior of that process. Deviations of a simulation indicate either limitations or errors in our knowledge. In addition, these observed differences often suggest verifiable experimental hypotheses to extend our knowledge. The biochemical approach to understanding biological processes is essentially one of simulation. A biochemist typically prepares a cell-free extract that can mediate a well-described physiological process.
Department of Electrical Engineering, University of Illinois, Champaign, Ill.

The three presentations published herewith were presented and discussed at the Tenth and last Conference on Cybernetics (Circular Causal and Feedback Mechanisms in Biological and Social Systems) in the usual informal style as represented in the previous four publications of this conference series. On review of the verbatim transcript of the discussion, it became evident to the Editors that in this instance the presentations, repeatedly interrupted by discussion, would not produce an effective publication. Accordingly each of the authors has been asked to pull his material together into a single consecutive statement, and the discussion has been omitted. New "breakthroughs" at any one spot may depend upon the integration of insights and technical skills derived from widely disparate areas of scientific investigation.