"Analogy-based reasoning: This term is sometimes used, as a synonym to case-based reasoning, to describe the typical case-based approach... However, it is also often used to characterize methods that solve new problems based on past cases from a different domain, while typical case-based methods focus on indexing and matching strategies for single-domain cases."
– Agnar Aamodt & Enric Plaza, "Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches," AI Communications, Vol. 7, No. 1, pp. 39–59, IOS Press, 1994.
Recently, however, there seems to be a new wave of interest in exploring aspects of similarity and analogical reasoning from various perspectives, as indicated by many papers, monographs, edited books, and doctoral theses. Amid these numerous publications, Similarity and Analogical Reasoning surely stands out as the most valuable reference work on the topic, covering especially well the recent advances in our understanding of it, with many chapters written by leading researchers. Although it grew out of a collection of papers initially presented at the Workshop on Similarity and Analogy, unlike typical workshop proceedings this volume is well edited and coherent in both content and format, with extensive cross-referencing and a detailed summary-comment chapter for every part of the book. Let us look at the book in detail. Because each chapter has a different perspective, approach, and organization, I first discuss a number of chapters one by one.
Autonomous systems must consider the moral ramifications of their actions. Moral norms vary from person to person and depend on common sense, which makes them difficult to encode explicitly in a system. I propose to develop a model of repeated analogical chaining and analogical reasoning that enables autonomous agents to interactively learn to apply common sense and to model an individual's moral norms.
As AI systems become ever more integrated into our lives, it is increasingly important to model moral reasoning accurately. Human moral reasoning is rapid and largely unconscious; analogical reasoning, which can likewise operate unconsciously, is therefore a promising approach for modeling it. This paper explores the use of analogical generalizations to improve moral reasoning. Analogical reasoning has already been used to successfully model moral reasoning in the MoralDM model, but that model exhaustively matches a new scenario against all known cases, which is computationally intractable and cognitively implausible for human-scale knowledge bases. We investigate the performance of an extension of MoralDM that uses the MAC/FAC model of analogical retrieval, over three conditions, across a set of highly confusable moral scenarios.
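To make the retrieval step concrete, here is a minimal Python sketch of two-stage MAC/FAC-style retrieval: a cheap content-vector dot product (MAC) prunes memory down to a shortlist, and only the shortlist receives the expensive structural comparison (FAC). The representations and function names are illustrative assumptions, not MoralDM's actual implementation, and the structural stage below is a stand-in for a full structure-mapping engine.

```python
from collections import Counter

def content_vector(case):
    # MAC-stage summary: a bag of the predicates appearing in a case.
    # A case is assumed to be a collection of (predicate, *args) tuples.
    return Counter(fact[0] for fact in case)

def mac_score(probe_vec, case_vec):
    # Cheap dot product over shared predicates.
    return sum(probe_vec[p] * case_vec[p] for p in probe_vec if p in case_vec)

def structural_score(probe, case):
    # Stand-in for the expensive FAC stage: count identical facts.
    # A real FAC stage would run a structure-mapping engine such as SME.
    return len(set(probe) & set(case))

def mac_fac(probe, memory, k=3):
    # MAC filters memory to k candidates; FAC picks the best of them.
    probe_vec = content_vector(probe)
    shortlist = sorted(memory,
                       key=lambda c: mac_score(probe_vec, content_vector(c)),
                       reverse=True)[:k]
    return max(shortlist, key=lambda c: structural_score(probe, c))
```

The design point the abstract turns on is visible here: the cost of the structural stage is paid only for the k MAC survivors rather than for every case in memory.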
Cognitive simulation of analogical processing can be used to answer comparison questions such as "What are the similarities and/or differences between A and B?" for concepts A and B in a knowledge base (KB). Previous attempts to use a general-purpose analogical reasoner to answer such questions revealed three major problems: (a) the system presented too much information in its answers, without highlighting the salient similarity or difference; (b) analogical inference found some incorrect differences; and (c) some expected similarities were not found. These problems stemmed primarily from the lack of a well-curated KB and, secondarily, from algorithmic deficiencies. In this paper, relying on a well-curated biology KB, we present a specific implementation of comparison questions inspired by a general model of analogical reasoning. We present numerous examples of answers produced by the system, along with empirical data on answer quality, to show that we have addressed many of the problems of the previous system.
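As a toy illustration of the comparison-question task (not the paper's system), the sketch below assumes each concept is stored as a flat set of (relation, filler) pairs, reports shared pairs as similarities and unshared pairs as differences, and omits the alignment and salience machinery a real analogical reasoner needs; the biology facts are invented for the example.

```python
def compare(kb, a, b):
    # Shared (relation, filler) pairs count as similarities;
    # pairs present for only one concept count as differences.
    facts_a, facts_b = kb[a], kb[b]
    similarities = facts_a & facts_b
    differences = {a: facts_a - facts_b, b: facts_b - facts_a}
    return similarities, differences

# Invented mini-KB for illustration only.
kb = {
    "Mitosis": {("is-a", "CellDivision"), ("produces", "DiploidCells")},
    "Meiosis": {("is-a", "CellDivision"), ("produces", "HaploidCells")},
}
sims, diffs = compare(kb, "Mitosis", "Meiosis")
# sims  -> {("is-a", "CellDivision")}
# diffs -> the differing "produces" fillers for each concept
```

Even this toy version shows why curation matters: a missing or misstated fact surfaces directly as a spurious difference or an absent similarity, exactly the failure modes (b) and (c) above.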
John Licato, Naveen Sundar Govindarajulu, Selmer Bringsjord, Michael Pomeranz, and Logan Gittelson (Rensselaer Polytechnic Institute)
Gödel's proof of his famous first incompleteness theorem (G1) has quite understandably long been a tantalizing target for those wanting to engineer impressively intelligent computational systems. After all, in establishing G1, Gödel did something that by any metric must be classified as stunningly intelligent. We observe that it has long been understood that there is some sort of analogical relationship between the Liar Paradox (LP) and G1, and that Gödel himself appreciated and exploited the relationship. Yet the exact nature of the relationship has hitherto not been uncovered, by which we mean that the following question has not been answered: Given a description of LP, and the suspicion that it may somehow be used by a suitably programmed computing machine to find a proof of the incompleteness of Peano Arithmetic, can such a machine, provided this description as input, produce as output a complete and verifiably correct proof of G1? In this paper, we summarize engineering that entails an affirmative answer to this question. Our approach uses what we call analogico-deductive reasoning (ADR), which combines analogical and deductive reasoning to produce a full deductive proof of G1 from LP. Our engineering uses a form of ADR based on our META-R system, and on a connection between the Liar Sentence in LP and Gödel's Fixed Point Lemma, from which G1 follows quickly.
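The structural kinship the abstract relies on can be stated compactly: the Liar Sentence asserts its own untruth, while the Fixed Point (Diagonal) Lemma produces a sentence asserting its own unprovability. The formulation below is the standard textbook one, not a quotation from the paper.

```latex
% Liar Sentence: a sentence asserting its own untruth.
L \;\leftrightarrow\; \neg\,\mathrm{True}(\ulcorner L \urcorner)

% Fixed Point (Diagonal) Lemma: for any formula \varphi(x) there is a
% sentence G with  PA \vdash G \leftrightarrow \varphi(\ulcorner G \urcorner).
% Instantiating \varphi(x) := \neg\,\mathrm{Prov}_{PA}(x) yields
PA \vdash\; G \;\leftrightarrow\; \neg\,\mathrm{Prov}_{PA}(\ulcorner G \urcorner)
```

Swapping the semantic predicate True, which is not arithmetically definable, for the representable provability predicate Prov_PA is what turns the paradox into a theorem: if PA is consistent, G is not provable in PA, which is the core of G1.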
Analogy is heavily used in instructional texts. We introduce the concept of analogical dialogue acts (ADAs), which represent the roles utterances play in instructional analogies. We describe a catalog of such acts, based on ideas from structure-mapping theory. We focus on the operations these acts trigger during the understanding of instructional texts, using the Structure-Mapping Engine (SME) and dynamic case construction in a computational model. We test this model on a small corpus of instructional analogies expressed in simplified English, understood via a semi-automatic natural-language system using analogical dialogue acts. After understanding the analogies, the system was able to answer questions that it could not answer without them.
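As a rough sketch of how such acts might drive dynamic case construction during reading: the act names below echo structure-mapping vocabulary, but this catalog and its operations are illustrative assumptions, not the paper's actual specification.

```python
from enum import Enum, auto

class ADA(Enum):
    # Hypothetical catalog of analogical dialogue acts; the paper's
    # actual inventory may differ.
    INTRODUCE_COMPARISON = auto()      # signal that an analogy is being drawn
    EXTEND_BASE = auto()               # add a fact to the familiar (base) case
    EXTEND_TARGET = auto()             # add a fact to the new (target) case
    INTRODUCE_CORRESPONDENCE = auto()  # assert that two entities align
    BLOCK_CORRESPONDENCE = auto()      # rule out a tempting alignment

def apply_act(act, content, base, target, constraints):
    # Update the dynamically constructed cases as each utterance is read.
    if act is ADA.EXTEND_BASE:
        base.add(content)
    elif act is ADA.EXTEND_TARGET:
        target.add(content)
    elif act is ADA.INTRODUCE_CORRESPONDENCE:
        constraints["required"].append(content)   # e.g. ("nucleus", "sun")
    elif act is ADA.BLOCK_CORRESPONDENCE:
        constraints["excluded"].append(content)
    elif act is ADA.INTRODUCE_COMPARISON:
        pass  # here a structure-mapping engine such as SME would be invoked
```

The point of the dispatch structure is that each utterance's role, not just its content, determines which case is extended and which mapping constraints accumulate before SME runs.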