Collaborating Authors


Uncertain Reasoning in Rule-Based Systems Using PRM

AAAI Conferences

Widely adopted for more than 20 years in industrial fields, business rules offer non-IT users the opportunity to define decision-making policies in a simple and intuitive way. When used in conjunction with probabilistic graphical models (PGMs), their expressiveness increases through the notion of probabilistic production rules (PPRs). In this paper we present a new model for PPRs and suggest a way to handle the combinatorial explosion caused by the number of parents of aggregators in PGMs such as Bayesian networks and probabilistic relational models (PRMs), in an industrial context where marginals must be computed rapidly.
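To make the combinatorial issue concrete: a Boolean aggregator with n binary parents needs a full conditional probability table with 2^n rows, whereas an independence-of-causal-influence parameterization such as noisy-OR grows only linearly in n. The sketch below illustrates that gap; it is a generic illustration under a noisy-OR assumption, not the PPR model proposed in the paper.

```python
# Illustrative sketch (not the authors' PPR model): a Boolean OR-style aggregator
# over n binary parents needs a full conditional probability table with 2**n rows,
# while a noisy-OR parameterization needs only one activation probability per parent.

def full_cpt_rows(n_parents: int) -> int:
    """Number of parent configurations a full CPT must enumerate."""
    return 2 ** n_parents

def noisy_or(parent_states, activations):
    """P(child = True | parents) under a noisy-OR model.

    parent_states: list of booleans, one per parent
    activations:   list of per-parent activation probabilities p_i
    """
    q = 1.0
    for active, p in zip(parent_states, activations):
        if active:
            q *= 1.0 - p     # each active parent independently fails to trigger the child
    return 1.0 - q

if __name__ == "__main__":
    n = 20
    print("full CPT rows:", full_cpt_rows(n))   # 1048576 parent configurations
    print("noisy-OR parameters:", n)            # one probability per parent
    activations = [0.3] * n
    states = [True] * 3 + [False] * (n - 3)
    print("P(child | 3 active parents) =", noisy_or(states, activations))  # 1 - 0.7**3 = 0.657
```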


Model-Based Diagnosis under Real-World Constraints

AI Magazine

I report on my experience over the past few years in introducing automated, model-based diagnostic technologies into industrial settings. In particular, I discuss the competition that this technology has been receiving from handcrafted, rule-based diagnostic systems, which have set high standards that model-based systems must meet before they can be viewed as viable alternatives. The battle between model-based and rule-based approaches to diagnosis has been over in the academic literature for many years, but the situation is different in industry, where rule-based systems are dominant and remain attractive given considerations of efficiency, embeddability, and cost-effectiveness. My goal in this article is to provide a perspective on this competition and to discuss a diagnostic tool, called DTOOL/CNETS, that I have been developing over the years as I tried to address the major challenges posed by rule-based systems. In particular, I discuss three major features of the developed tool that were adopted, designed, or innovated to address these challenges: (1) its compositional modeling approach, (2) its structure-based computational approach, and (3) its ability to synthesize embeddable diagnostic systems for a variety of software and hardware platforms.
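As a contrast with symptom-to-fault rules, the toy sketch below shows what consistency-based, model-based diagnosis amounts to: components carry behavioral models, a fault assumption removes a component's constraint, and diagnoses are the minimal fault sets consistent with the observations. This is a didactic example only and says nothing about the internals of DTOOL/CNETS.

```python
# Toy consistency-based diagnosis sketch (a didactic example, not DTOOL/CNETS):
# two inverters A and B wired in series. A component assumed healthy behaves as
# modeled; a component assumed faulty places no constraint on its output.

from itertools import combinations

COMPONENTS = ["A", "B"]

def consistent(faulty, x_in, y_obs):
    """Is the observed output consistent with the given fault assumption?"""
    mid_values = [0, 1] if "A" in faulty else [1 - x_in]   # wire between A and B
    for mid in mid_values:
        out_values = [0, 1] if "B" in faulty else [1 - mid]
        if y_obs in out_values:
            return True
    return False

def diagnoses(x_in, y_obs):
    """Return the minimal-cardinality fault sets consistent with the observation."""
    for k in range(len(COMPONENTS) + 1):
        found = [c for c in combinations(COMPONENTS, k) if consistent(set(c), x_in, y_obs)]
        if found:
            return found        # stop at the smallest cardinality that explains the data
    return []

if __name__ == "__main__":
    # Input 0 should come out as 0 after two inversions; observing 1 implicates A or B.
    print(diagnoses(x_in=0, y_obs=1))   # [('A',), ('B',)]
```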


Learning to Play Using Low-Complexity Rule-Based Policies: Illustrations through Ms. Pac-Man

Journal of Artificial Intelligence Research

In this article we propose a method that can deal with certain combinatorial reinforcement learning tasks. We demonstrate the approach in the popular Ms. Pac-Man game. We define a set of high-level observation and action modules, from which rule-based policies are constructed automatically. In these policies, actions are temporally extended and may run concurrently. The policy of the agent is encoded by a compact decision list. The components of the list are selected from a large pool of rules, which can be either hand-crafted or generated automatically. A suitable selection of rules is learnt by the cross-entropy method, a recent global optimization algorithm that fits our framework smoothly. Cross-entropy-optimized policies perform better than our hand-crafted policy and reach the score of average human players. We argue that learning is successful mainly because (i) policies may apply concurrent actions, so the policy space is sufficiently rich, and (ii) the search is biased towards low-complexity policies, so solutions with a compact description can be found quickly if they exist.
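The rule-selection step can be pictured as follows: each candidate policy is a bit vector indicating which rules from the pool enter the decision list, and the cross-entropy method maintains per-rule inclusion probabilities that are repeatedly refit to an elite fraction of sampled policies. The sketch below uses a toy surrogate fitness in place of game episodes; the pool size, hyperparameters, and "target" rule set are illustrative assumptions, not values from the paper.

```python
# Minimal cross-entropy method sketch for selecting a subset of rules from a pool.
# The game evaluation is replaced by a toy surrogate fitness; rule indices, pool size,
# and all hyperparameters below are illustrative assumptions, not the paper's setup.

import random

POOL_SIZE  = 30        # number of candidate rules in the pool
N_SAMPLES  = 100       # policies sampled per iteration
ELITE_FRAC = 0.1       # fraction of best samples used to update the distribution
ALPHA      = 0.6       # smoothing factor for the parameter update
TARGET     = {2, 5, 11, 17}   # pretend these rules form the "good" policy

def fitness(selection):
    """Toy stand-in for an episode score: reward useful rules, penalize long lists."""
    chosen = {i for i, bit in enumerate(selection) if bit}
    return 10 * len(chosen & TARGET) - len(chosen)

def cross_entropy_select(iterations=30, seed=0):
    rng = random.Random(seed)
    p = [0.5] * POOL_SIZE                      # Bernoulli inclusion probability per rule
    for _ in range(iterations):
        samples = [[1 if rng.random() < pi else 0 for pi in p] for _ in range(N_SAMPLES)]
        samples.sort(key=fitness, reverse=True)
        elite = samples[: max(1, int(ELITE_FRAC * N_SAMPLES))]
        for i in range(POOL_SIZE):             # move p toward the elite samples' frequencies
            freq = sum(s[i] for s in elite) / len(elite)
            p[i] = (1 - ALPHA) * p[i] + ALPHA * freq
    return [i for i, pi in enumerate(p) if pi > 0.5]

if __name__ == "__main__":
    print("selected rules:", cross_entropy_select())   # typically converges to [2, 5, 11, 17]
```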


Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project

Classics

Artificial intelligence, or AI, is largely an experimental science: at least as much progress has been made by building and analyzing programs as by examining theoretical questions. MYCIN is one of several well-known programs that embody some intelligence and provide data on the extent to which intelligent behavior can be programmed. As with other AI programs, its development was slow and not always in a forward direction. But we feel we learned some useful lessons in the course of nearly a decade of work on MYCIN and related programs. In this book we share the results of many experiments performed in that time, and we try to paint a coherent picture of the work. The book is intended to be a critical analysis of several pieces of related research, performed by a large number of scientists. We believe that the whole field of AI will benefit from such attempts to take a detailed retrospective look at experiments, for in this way the scientific foundations of the field will gradually be defined. It is for all these reasons that we have prepared this analysis of the MYCIN experiments.

Contents

Contributors
Foreword (Allen Newell)
Preface

Part One: Background
  Chapter 1. The Context of the MYCIN Experiments
  Chapter 2. The Origin of Rule-Based Systems in AI (Randall Davis and Jonathan J. King)

Part Two: Using Rules
  Chapter 3. The Evolution of MYCIN’s Rule Form
  Chapter 4. The Structure of the MYCIN System (William van Melle)
  Chapter 5. Details of the Consultation System (Edward H. Shortliffe)
  Chapter 6. Details of the Revised Therapy Algorithm (William J. Clancey)

Part Three: Building a Knowledge Base
  Chapter 7. Knowledge Engineering
  Chapter 8. Completeness and Consistency in a Rule-Based System (Motoi Suwa, A. Carlisle Scott, and Edward H. Shortliffe)
  Chapter 9. Interactive Transfer of Expertise (Randall Davis)

Part Four: Reasoning Under Uncertainty
  Chapter 10. Uncertainty and Evidential Support
  Chapter 11. A Model of Inexact Reasoning in Medicine (Edward H. Shortliffe and Bruce G. Buchanan)
  Chapter 12. Probabilistic Reasoning and Certainty Factors (J. Barclay Adams)
  Chapter 13. The Dempster-Shafer Theory of Evidence (Jean Gordon and Edward H. Shortliffe)

Part Five: Generalizing MYCIN
  Chapter 14. Use of the MYCIN Inference Engine
  Chapter 15. EMYCIN: A Knowledge Engineer’s Tool for Constructing Rule-Based Expert Systems (William van Melle, Edward H. Shortliffe, and Bruce G. Buchanan)
  Chapter 16. Experience Using EMYCIN (James S. Bennett and Robert S. Engelmore)

Part Six: Explaining the Reasoning
  Chapter 17. Explanation as a Topic of AI Research
  Chapter 18. Methods for Generating Explanations (A. Carlisle Scott, William J. Clancey, Randall Davis, and Edward H. Shortliffe)
  Chapter 19. Specialized Explanations for Dosage Selection (Sharon Wraith Bennett and A. Carlisle Scott)
  Chapter 20. Customized Explanations Using Causal Knowledge (Jerold W. Wallis and Edward H. Shortliffe)

Part Seven: Using Other Representations
  Chapter 21. Other Representation Frameworks
  Chapter 22. Extensions to the Rule-Based Formalism for a Monitoring Task (Lawrence M. Fagan, John C. Kunz, Edward A. Feigenbaum, and John J. Osborn)
  Chapter 23. A Representation Scheme Using Both Frames and Rules (Janice S. Aikins)
  Chapter 24. Another Look at Frames (David E. Smith and Jan E. Clayton)

Part Eight: Tutoring
  Chapter 25. Intelligent Computer-Aided Instruction
  Chapter 26. Use of MYCIN’s Rules for Tutoring (William J. Clancey)

Part Nine: Augmenting the Rules
  Chapter 27. Additional Knowledge Structures
  Chapter 28. Meta-Level Knowledge (Randall Davis and Bruce G. Buchanan)
  Chapter 29. Extensions to Rules for Explanation and Tutoring (William J. Clancey)

Part Ten: Evaluating Performance
  Chapter 30. The Problem of Evaluation
  Chapter 31. An Evaluation of MYCIN’s Advice (Victor L. Yu, Lawrence M. Fagan, Sharon Wraith Bennett, William J. Clancey, A. Carlisle Scott, John F. Hannigan, Robert L. Blum, Bruce G. Buchanan, and Stanley N. Cohen)

Part Eleven: Designing for Human Use
  Chapter 32. Human Engineering of Medical Expert Systems
  Chapter 33. Strategies for Understanding Structured English (Alain Bonnet)
  Chapter 34. An Analysis of Physicians’ Attitudes (Randy L. Teach and Edward H. Shortliffe)
  Chapter 35. An Expert System for Oncology Protocol Management (Edward H. Shortliffe, A. Carlisle Scott, Miriam B. Bischoff, A. Bruce Campbell, William van Melle, and Charlotte D. Jacobs)

Part Twelve: Conclusions
  Chapter 36. Major Lessons from This Work

Epilog
Appendix
References
Name Index
Subject Index

Reading, MA: Addison-Wesley Publishing Co., Inc.