If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Utah-based HireVue uses video interviews to examine candidates' word choice, voice inflection, and micro gestures for subtle clues, such as whether their facial expressions contradict their words. Yale School of Management professor Jason Dana, who has studied hiring for years, recently made waves with a high-profile article in the New York Times that excoriated job interviews as useless. When Google examined its internal evidence, it likewise found that grades, test scores, and a school's pedigree were not good predictors of job success. Google created a program called qDroid, which drafts questions for interviewers by parsing the data an applicant has provided against the qualities Google emphasizes.
A research team from Beth Israel Deaconess Medical Center (BIDMC) and Harvard Medical School (HMS) recently developed artificial intelligence (AI) methods aimed at training computers to interpret pathology images, with the long-term goal of building AI-powered systems to make pathologic diagnoses more accurate. "Our AI method is based on deep learning, a machine-learning algorithm used for a range of applications including speech recognition and image recognition," explained pathologist Andrew Beck, MD, PhD, Director of Bioinformatics at the Cancer Research Institute at BIDMC and an Associate Professor at HMS. In an objective evaluation in which researchers were given slides of lymph node cells and asked to determine whether they contained cancer, the team's automated diagnostic method proved accurate approximately 92 percent of the time, explained Khosla, a member of the team, adding, "This nearly matched the success rate of a human pathologist, whose results were 96 percent accurate." "But the truly exciting thing was when we combined the pathologist's analysis with our automated computational diagnostic method, the result improved to 99.5 percent accuracy," said Beck.
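The arithmetic behind the combined result is worth making explicit. As a toy illustration (not the team's actual combination method): if the AI's errors and the pathologist's errors were statistically independent, and a combined reading erred only when both erred, the expected error rate would be the product of the two individual error rates, which lands close to the reported 99.5 percent figure.

```python
# Toy sketch of combining two independent error rates -- an assumption for
# illustration, not the study's actual method of combining readings.
e_ai = 1 - 0.92    # AI alone: ~92% accurate on the lymph node slides
e_path = 1 - 0.96  # human pathologist alone: ~96% accurate

# If errors were independent and the combined reading errs only when
# both readers err, the combined error rate is the product.
combined_error = e_ai * e_path
combined_accuracy = 1 - combined_error

print(f"{combined_accuracy:.4f}")  # ~0.9968, in the neighborhood of the reported 99.5%
```

In practice the two readers' mistakes are correlated (hard cases are hard for both), so the real gain is smaller than the independence assumption predicts, which is consistent with 99.5 percent rather than 99.7.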
Machine learning can be used to formulate new meta-level knowledge. A small MYCIN-like medical diagnosis system was constructed as a starting point. Two heuristic methods are used in a program called Meta-Rulegen to form metarules from the knowledge base in the diagnosis system. In a preliminary study, 63 metarules were formed automatically and, by judiciously selecting a set of metarules, the efficiency of the diagnosis system can be improved significantly without degrading the quality of advice. This study suggests that metarules can be learned automatically to improve the efficiency of rule-based systems.

1 Introduction

The value of meta-level knowledge for guiding the invocation, construction, and explanation of object-level rules in an expert system has been demonstrated by Davis. In this paper we explore the use of machine learning methods for ...
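The efficiency gain from metarules comes from pruning: a metarule decides which object-level rules are worth invoking in the current context, so the interpreter skips the rest. The sketch below is a hypothetical illustration of that mechanism; the rule format and the metarule itself are invented, since the excerpt does not show Meta-Rulegen's actual representation.

```python
# Hypothetical sketch of metarule-based pruning in a MYCIN-like system.
# Rule and context formats are invented for illustration only.

rules = [
    {"id": 1, "if": {"site": "blood"}, "then": "organism may be bacteroides"},
    {"id": 2, "if": {"site": "skin"},  "then": "organism may be staphylococcus"},
    {"id": 3, "if": {"site": "blood"}, "then": "organism may be e. coli"},
]

def metarule(rule, context):
    # Example metarule: when the culture site is known, only invoke
    # object-level rules whose premise mentions that site.
    return rule["if"].get("site") == context.get("site")

def applicable(rules, context):
    # The interpreter consults the metarule before trying each object rule,
    # pruning rules that cannot contribute in this context.
    return [r for r in rules if metarule(r, context)]

for r in applicable(rules, {"site": "blood"}):
    print(r["id"], r["then"])  # rules 1 and 3 fire; rule 2 is pruned
```

Pruning like this speeds up the interpreter without changing the advice, provided the metarule never excludes a rule that would actually have fired, which is why the paper stresses judicious selection of the learned metarules.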
Reprinted by permission of the author. Published in the Proceedings of a Symposium on Computers in Medicine, Annual Meeting, California Medical Association, Anaheim, CA, February 1984. Edward H. Shortliffe, M.D., Ph.D., Division of General Internal Medicine, Department of Medicine, Stanford University School of Medicine, Stanford, California 94305. Although computing technology is playing an increasingly important role in medicine, systems designed to advise physicians on diagnosis or therapy selection have remained largely experimental to date. Despite diverse research efforts, and a literature on computer-aided diagnosis that has numbered over 1500 references in the last 20 years, clinical consultation programs have failed to achieve wide acceptance. The reasons for attempting to develop such systems are self-evident.
AUTOMATED REASONING: LOGICAL AND HEURISTIC

The idea of automated reasoning is founded on the fact that computers are general-purpose symbol manipulation devices, not mere numerical calculating machines. Symbolic inference since the time of Aristotle has involved the combination of symbolic expressions. One line of attack on the problem of how to use computers for automated reasoning is the logical one: exploit the syntax and rules of deductive logic as expressed by Aristotle, Russell & Whitehead, or Church. Extend the formalism where necessary to represent useful concepts not easily expressed, but focus sharply on retaining the logical consistency that these formalisms provide. A primary research problem is finding computational methods that are efficient enough for this theorem-proving approach to be applied to reasoning problems of real-world complexity.
Heuristic Programming Project Report No. HPP 82-37, May 1982

COMPUTER-BASED CLINICAL DECISION AIDS: SOME PRACTICAL CONSIDERATIONS
Edward H. Shortliffe, MD, PhD
Division of General Internal Medicine, Department of Medicine, Stanford University School of Medicine, Stanford, California 94305

To be presented at the AMIA Congress, Hyatt on Union Square, San Francisco, California, 2-5 May 1982. Dr. Shortliffe is recipient of Research Career Development Award LM00048 from the National Library of Medicine.

ABSTRACT

Medical decision making research has tended to emphasize the generation of optimal decisions, an issue which is central to the development of clinically useful consultation programs. This paper stresses the need to consider other theoretical and practical issues that are pertinent if consultation systems are to be accepted by physicians. Since adequate decision making performance remains an essential component of acceptable systems, the paper suggests criteria for selecting clinical problems that may be amenable to short-term implementation using state-of-the-art techniques.

Introduction

At the beginning of a third decade of research into the development of computer-based diagnostic aids, it is appropriate for medical computer scientists to assess the strides that have been taken, the barriers that remain, and the optimal strategies for furthering the field in the years ahead.
Reprinted with permission from Science, Vol.

After twenty-five years of use, the very name "artificial intelligence" -- combining as it does a highly immodest ambition with a suggestion of deceit -- still has the power to provoke controversy. Research in artificial intelligence has several goals. One is the development of computational models of intelligent behavior, including both its cognitive and perceptual aspects. A more engineering-oriented goal is the development of computer programs that can solve problems normally thought to require human intelligence.
Edward H. Shortliffe, Jul 1981, HPP-81-9

EVALUATING EXPERT SYSTEMS
Edward H. Shortliffe
Heuristic Programming Project, Departments of Medicine and Computer Science, Stanford University, Stanford, California 94305, July 1981

This paper is the author's contribution to Chapter 6 in the volume EXPERT SYSTEMS, edited by R. Hayes-Roth, D. Lenat, and D. Waterman. The full article is entitled "Evaluation of expert systems: issues and case studies", and is authored by J. Gaschnig, P. Klahr, H. Pople, and E. Shortliffe. The volume is the result of a Workshop on Expert Systems held in San Diego in August 1980 and sponsored by the Rand Corporation, ARPA, and the NSF. Parts of Chapters 7 & 8 reprinted with permission.

Issues in the Evaluation of Expert Systems

We have been discussing the reasons for doing evaluations of expert systems, or for having reservations about getting involved in the evaluation process, but we have not addressed the nature of the evaluation process itself. In this section we define many of the parameters that determine an appropriate design for an evaluation experiment.
Reprinted by permission of the Canadian Society for Computational Studies of Intelligence. Reprinted from: Proceedings of the CSCSI/SCEIO Conference, 14-16 May 1980, University of Victoria, Victoria, British Columbia.

ABSTRACT

Computer systems for use by physicians have had limited impact on clinical medicine. My goal is to present design criteria which may encourage the use of computer programs by physicians, and to show that AI offers some particularly pertinent methods for responding to the design criteria outlined. The MYCIN system is used to illustrate the ways in which our research group has attempted to respond to the design criteria cited.
Despite diverse research efforts, and a literature on computer-aided diagnosis that has numbered at least 1,000 references in the last 20 years, clinical consultation programs have seldom been used other than in experimental environments. The reasons for attempting to develop such systems are self-evident. Growth in medical knowledge has far surpassed the ability of the single practitioner to master it all, and the computer's superior information processing capacity thereby offers a natural appeal. Furthermore, the reasoning processes of medical experts are poorly understood; attempts to model expert decision making necessarily deepen our understanding of those processes. The new insights that result may also allow us more adequately to teach medical students and house staff the techniques for reaching good decisions, rather than merely to offer a collection of facts that they must independently learn to use coherently.