If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
ABSTRACT Computer systems for use by physicians have had limited impact on clinical medicine. When one examines the most common reasons for poor acceptance of medical computing systems, the potential relevance of artificial intelligence techniques becomes evident. This paper proposes design criteria for clinical computing systems and demonstrates their relationship to current research in knowledge engineering. The MYCIN System is used to illustrate the ways in which our research group has attempted to respond to the design criteria cited. My goal is to present design criteria which may encourage the use of computer programs by physicians, and to show that AI offers some particularly pertinent methods for responding to the design criteria outlined.
EPISTEMOLOGICAL PROBLEMS OF ARTIFICIAL INTELLIGENCE John McCarthy Computer Science Department Stanford University Stanford, California 94305 Introduction In (McCarthy and Hayes 1969), we proposed dividing the artificial intelligence problem into two parts - an epistemological part and a heuristic part. This lecture further explains this division, explains some of the epistemological problems, and presents some new results and approaches. The epistemological part of AI studies what kinds of facts about the world are available to an observer with given opportunities to observe, how these facts can be represented in the memory of a computer, and what rules permit legitimate conclusions to be drawn from these facts. It leaves aside the heuristic problems of how to search spaces of possibilities and how to match patterns. Considering epistemological problems separately has the following advantages: 1. The same problems of what information is available to an observer and what conclusions ...
Meta-DENDRAL programs are products of a large, interdisciplinary group of Stanford University scientists concerned with many and highly varied aspects of the mechanization of scientific reasoning and the formalization of scientific knowledge for this purpose. An early motivation for our work was to explore the power of existing AI methods, such as heuristic search, for reasoning in difficult scientific problems. The DENDRAL project began in 1965. Then, as now, we were concerned with the conceptual problems of designing and writing symbol-manipulation programs that used substantial bodies of domain-specific scientific knowledge. By contrast, this was a time in the history of AI when most laboratories were working on general problem-solving methods; in 1965, for example, work on resolution theorem proving was in its prime.
ABSTRACT This talk reviews those efforts in automatic theorem proving, during the past few years, which have emphasized techniques other than resolution. These include: knowledge bases, natural deduction, reduction (rewrite rules), typing, procedures, advice, controlled forward chaining, algebraic simplification, built-in associativity and commutativity, models, analogy, and man-machine systems. Examples are given and suggestions are made for future work. Earlier work by Newell, Simon, Shaw, and Gelernter in the middle and late 1950s emphasized the heuristic approach, but the weight soon shifted to various syntactic methods, culminating in a large effort on resolution-type systems in the last half of the 1960s. It was about 1970 when considerable interest was revived in heuristic methods and the use of human-supplied, domain-dependent knowledge.
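One technique the abstract names, reduction by rewrite rules, can be sketched briefly. The following is a minimal illustrative example, not a reconstruction of any system the talk surveys: algebraic terms are nested tuples such as ("+", "x", 0), and each rule replaces a recognizable pattern with a simpler term.

```python
def simplify(term):
    """Apply algebraic rewrite rules bottom-up until no rule fires."""
    if not isinstance(term, tuple):
        return term                      # a variable or constant
    op, *args = term
    args = [simplify(a) for a in args]   # simplify subterms first
    # Rewrite rules: x + 0 -> x, x * 1 -> x, x * 0 -> 0
    if op == "+" and args[1] == 0:
        return args[0]
    if op == "*" and args[1] == 1:
        return args[0]
    if op == "*" and args[1] == 0:
        return 0
    return (op, *args)

print(simplify(("+", ("*", "x", 1), 0)))   # prints x
```

Because rules are applied to subterms before the enclosing term, reductions cascade: x * 1 simplifies to x, after which x + 0 simplifies to x.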
SESSION 4B PAPER 3 TO WHAT EXTENT CAN ADMINISTRATION BE MECHANIZED? Mr. J. H. H. Merriman was educated at King's College School, Wimbledon, and King's College, University of London. He obtained his B.Sc. (Hons.) in 1935 and did postgraduate research at King's College London, obtaining his M.Sc. He joined the Engineering Department, Radio Research Branch, Dollis Hill, in 1936 and was associated with the development of long-distance radio communication systems. He was Officer-in-Charge of the Castleton radio research station 1940-8, and from 1948-5 was in the Office of the Engineer-in-Chief, G.P.O., responsible for microwave system development and planning.
Recent activities have swung away from biology, but this will be remedied. The application of learning machines to process control is discussed. Three approaches to the design of learning machines are shown to have more in common than is immediately apparent. These are (1) an approach based on the use of conditional probabilities, (2) one suggested by the idea that biological learning is due to the facilitation of synapses, and (3) one based on existing statistical theory dealing with the optimisation of operating conditions. Although the application of logical-type machines to process control involves formidable complexity, design principles are evolved here for a learning machine which deals with quantitative signals and depends for its operation on the computation of correlation coefficients.
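The correlation-coefficient idea above can be sketched in a few lines. This is a hedged illustration of the general principle, not the paper's design: the machine records pairs of control settings and plant outputs, estimates their correlation, and nudges the operating point in the direction the correlation suggests. The function names and the update rule are illustrative assumptions.

```python
import statistics

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length signal records."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def adjust(setting, xs, ys, step=0.1):
    """Move the control setting up if output rises with it, down otherwise."""
    return setting + step * correlation(xs, ys)
```

A positive correlation between recent settings and outputs pushes the setting upward; a negative one pushes it downward, so the machine hill-climbs toward better operating conditions using only observed signals.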
John McCarthy, born in Boston, Mass., in 1927, received his B.S. degree in mathematics at the California Institute of Technology in 1948, and his Ph.D., also in mathematics, at Princeton University in 1951. He is at present Assistant Professor of Communication Sciences at the Massachusetts Institute of Technology. His present interests are in the artificial intelligence problem, automatic programming and mathematical logic. He is co-editor with Dr. C. E. Shannon of "Automata Studies". However, certain elementary verbal reasoning processes so simple that they can be carried out by any non-feeble-minded human have yet to be simulated by machine programs.
Summary--There is frequently more or less acrimonious discussion about artificial intelligence and intelligent machines and their place in science. Usually the discussion settles down to the reiteration of two points of view. This paper is concerned with the difference between them. Do they merely reflect two emotional or ethical biases, or is there an underlying technical judgment on which they disagree? The authors claim the latter and purport to show what that judgment is.
KEYNOTE: SOME NOTES ON THE TECHNOLOGY OF RECOGNITION Oliver G. Selfridge Lincoln Laboratory,* Massachusetts Institute of Technology Lexington, Massachusetts We are here today, I take it, to appraise what has been done, and to discern the future, if we may. I notice that a man's worth these times is in the words he speaks and writes. The understanding that may lead to a publishable paper is much to be preferred to the understanding that leads to a useful machine. "But I say unto you, that every idle word that men shall speak, they shall give account thereof in the day of judgment. For by thy words thou shalt be justified, and by thy words thou shalt be condemned."
Summary--Attempts to mechanize character reading and speech recognition have greatly accelerated in the past decade. This increased interest was prompted by the promise of computer inputs more flexible in format than punched cards or magnetic tape. Research has shown that automatic sensing can be done reliably if the task is suitably delimited. Cleverly designed marks on standard forms can be both machine and man readable. A single type font or a few fixed ones are tractable if the print quality is controlled.