Buchanan, Bruce G.
In this brief history, the beginnings of artificial intelligence are traced to philosophy, fiction, and imagination. Early inventions in electronics, engineering, and many other disciplines have influenced AI. Some early milestones include work in problem solving that included basic work in learning, knowledge representation, and inference, as well as demonstration programs in language understanding, translation, theorem proving, associative memory, and knowledge-based systems. The article ends with a brief examination of influential organizations and current issues facing the field.
Buchanan, Bruce G., Livingston, Gary R.
Knowledge discovery programs in the biological sciences require flexibility in the use of symbolic data and semantic information. Thus, the framework for the discovery program must facilitate proposing and selecting the next task to perform, and performing the selected tasks. The framework we describe, called the agenda- and justification-based framework, has several properties that are desirable in semiautonomous discovery systems: it provides a mechanism for estimating the plausibility of tasks, it uses heuristics to propose and perform tasks, and it facilitates the encoding of general discovery strategies and the use of background knowledge. Our results demonstrate that both the reasons given for performing tasks and the estimates of the interestingness of the concepts and hypotheses examined by our program, HAMB, contribute to its performance, and that the program can discover novel, interesting relationships in biological data.
Buchanan, Bruce G.
Intelligence is a complex, natural phenomenon exhibited by humans and many other living things, without sharply defined boundaries between intelligent and unintelligent behaviour. Artificial intelligence focuses on the phenomenon of intelligent behaviour, in humans or machines. Experimentation with computer programs allows us to manipulate their design and intervene in the environmental conditions in ways that are not possible with humans. Thus, experimentation can help us to understand what principles govern intelligent action and what mechanisms are sufficient for computers to replicate intelligent behaviours.
Phil. Trans. R. Soc. Lond. A. 1994, 349, 1689.
Buchanan, Bruce G.
Artificial intelligence, or AI, is largely an experimental science—at least as much progress has been made by building and analyzing programs as by examining theoretical questions. MYCIN is one of several well-known programs that embody some intelligence and provide data on the extent to which intelligent behavior can be programmed. As with other AI programs, its development was slow and not always in a forward direction. But we feel we learned some useful lessons in the course of nearly a decade of work on MYCIN and related programs. In this book we share the results of many experiments performed in that time, and we try to paint a coherent picture of the work. The book is intended to be a critical analysis of several pieces of related research, performed by a large number of scientists. We believe that the whole field of AI will benefit from such attempts to take a detailed retrospective look at experiments, for in this way the scientific foundations of the field will gradually be defined. It is for all these reasons that we have prepared this analysis of the MYCIN experiments.

Contents
Contributors
Foreword (Allen Newell)
Preface
Part One: Background
  Chapter 1—The Context of the MYCIN Experiments
  Chapter 2—The Origin of Rule-Based Systems in AI (Randall Davis and Jonathan J. King)
Part Two: Using Rules
  Chapter 3—The Evolution of MYCIN’s Rule Form
  Chapter 4—The Structure of the MYCIN System (William van Melle)
  Chapter 5—Details of the Consultation System (Edward H. Shortliffe)
  Chapter 6—Details of the Revised Therapy Algorithm (William J. Clancey)
Part Three: Building a Knowledge Base
  Chapter 7—Knowledge Engineering
  Chapter 8—Completeness and Consistency in a Rule-Based System (Motoi Suwa, A. Carlisle Scott, and Edward H. Shortliffe)
  Chapter 9—Interactive Transfer of Expertise (Randall Davis)
Part Four: Reasoning Under Uncertainty
  Chapter 10—Uncertainty and Evidential Support
  Chapter 11—A Model of Inexact Reasoning in Medicine (Edward H. Shortliffe and Bruce G. Buchanan)
  Chapter 12—Probabilistic Reasoning and Certainty Factors (J. Barclay Adams)
  Chapter 13—The Dempster-Shafer Theory of Evidence (Jean Gordon and Edward H. Shortliffe)
Part Five: Generalizing MYCIN
  Chapter 14—Use of the MYCIN Inference Engine
  Chapter 15—EMYCIN: A Knowledge Engineer’s Tool for Constructing Rule-Based Expert Systems (William van Melle, Edward H. Shortliffe, and Bruce G. Buchanan)
  Chapter 16—Experience Using EMYCIN (James S. Bennett and Robert S. Engelmore)
Part Six: Explaining the Reasoning
  Chapter 17—Explanation as a Topic of AI Research
  Chapter 18—Methods for Generating Explanations (A. Carlisle Scott, William J. Clancey, Randall Davis, and Edward H. Shortliffe)
  Chapter 19—Specialized Explanations for Dosage Selection (Sharon Wraith Bennett and A. Carlisle Scott)
  Chapter 20—Customized Explanations Using Causal Knowledge (Jerold W. Wallis and Edward H. Shortliffe)
Part Seven: Using Other Representations
  Chapter 21—Other Representation Frameworks
  Chapter 22—Extensions to the Rule-Based Formalism for a Monitoring Task (Lawrence M. Fagan, John C. Kunz, Edward A. Feigenbaum, and John J. Osborn)
  Chapter 23—A Representation Scheme Using Both Frames and Rules (Janice S. Aikins)
  Chapter 24—Another Look at Frames (David E. Smith and Jan E. Clayton)
Part Eight: Tutoring
  Chapter 25—Intelligent Computer-Aided Instruction
  Chapter 26—Use of MYCIN’s Rules for Tutoring (William J. Clancey)
Part Nine: Augmenting the Rules
  Chapter 27—Additional Knowledge Structures
  Chapter 28—Meta-Level Knowledge (Randall Davis and Bruce G. Buchanan)
  Chapter 29—Extensions to Rules for Explanation and Tutoring (William J. Clancey)
Part Ten: Evaluating Performance
  Chapter 30—The Problem of Evaluation
  Chapter 31—An Evaluation of MYCIN’s Advice (Victor L. Yu, Lawrence M. Fagan, Sharon Wraith Bennett, William J. Clancey, A. Carlisle Scott, John F. Hannigan, Robert L. Blum, Bruce G. Buchanan, and Stanley N. Cohen)
Part Eleven: Designing for Human Use
  Chapter 32—Human Engineering of Medical Expert Systems
  Chapter 33—Strategies for Understanding Structured English (Alain Bonnet)
  Chapter 34—An Analysis of Physicians’ Attitudes (Randy L. Teach and Edward H. Shortliffe)
  Chapter 35—An Expert System for Oncology Protocol Management (Edward H. Shortliffe, A. Carlisle Scott, Miriam B. Bischoff, A. Bruce Campbell, William van Melle, and Charlotte D. Jacobs)
Part Twelve: Conclusions
  Chapter 36—Major Lessons from This Work
Epilog
Appendix
References
Name Index
Subject Index

Reading, MA: Addison-Wesley Publishing Co., Inc.
Buchanan, Bruce G.
The Stanford Artificial Intelligence Project, later known as the Stanford AI Lab or SAIL, was created by Prof. John McCarthy shortly after his arrival at Stanford in 1962. As a faculty member in the Computer Science Division of the Mathematics Department, McCarthy began supervising research in artificial intelligence and timesharing systems with a few students. From this small start, McCarthy built a large and active research organization involving many other faculty and research projects as well as his own. Nevertheless, there are some important dimensions to the research that took place in the AI Lab that this brief introduction will try to put in historical context.
Buchanan, Bruce G., Feigenbaum, Edward A.
The Heuristic Programming Project of the Stanford University Computer Science Department is a laboratory of about fifty people whose main goals are to model the nature of scientific reasoning processes across various types of scientific problems and various areas of science and medicine, and to construct expert systems -- programs that achieve high levels of performance on tasks that normally require significant human expertise for their solution.