"Analogy-based reasoning: This term is sometimes used, as a synonym to case-based reasoning, to describe the typical case-based approach... However, it is also often used to characterize methods that solve new problems based on past cases from a different domain, while typical case-based methods focus on indexing and matching strategies for single-domain cases."
– Agnar Aamodt and Enric Plaza, "Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches," AI Communications, IOS Press, Vol. 7, No. 1, pp. 39–59.
Analogical reasoning is effective at capturing linguistic regularities. This paper proposes an analogical reasoning task for Chinese. After delving into Chinese lexical knowledge, we sketch 68 implicit morphological relations and 28 explicit semantic relations. A large, balanced dataset, CA8, comprising 17,813 questions, is then built for this task. Furthermore, we systematically explore the influence of vector representations, context features, and corpora on analogical reasoning. The experiments show that CA8 is a reliable benchmark for evaluating Chinese word embeddings.
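Analogy benchmarks of this kind are conventionally evaluated by vector arithmetic over word embeddings: for a question a : b :: c : ?, the answer is the vocabulary word whose vector lies closest to b − a + c under cosine similarity. A minimal sketch of that evaluation rule (the tiny embedding table here is hypothetical, not drawn from CA8):

```python
import numpy as np

def solve_analogy(a, b, c, embeddings):
    """Return the word d maximizing cosine(v_b - v_a + v_c, v_d),
    excluding the three query words themselves."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    target = target / np.linalg.norm(target)
    best_word, best_score = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, c):
            continue
        score = float(np.dot(target, vec / np.linalg.norm(vec)))
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Hypothetical toy vectors; a real evaluation would load trained embeddings.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.8, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "queen": np.array([0.2, 0.8, 0.9]),
}
answer = solve_analogy("man", "king", "woman", emb)
```

A benchmark score is then simply the fraction of questions for which the top-ranked word matches the gold answer.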
However, there has recently been a new wave of interest, as indicated by many papers, monographs, edited books, and doctoral theses, in exploring aspects of similarity and analogical reasoning from various perspectives. Amid these numerous publications, Similarity and Analogical Reasoning surely stands out as the most valuable reference work on the topic, covering especially well the recent advances in its understanding, with many chapters written by leading researchers. Although it is based on a collection of papers initially presented at the Workshop on Similarity and Analogy, unlike typical workshop proceedings, this volume is well edited and coherent in both content and format, with a great deal of cross-referencing and detailed summary-comment chapters for every part of the book. Let us look at the book in detail. Because each of these chapters has a different perspective, approach, and organization, I first discuss a number of chapters one by one.
Autonomous systems must consider the moral ramifications of their actions. Moral norms vary among people and depend on common sense, posing a challenge for encoding them explicitly in a system. I propose to develop a model of repeated analogical chaining and analogical reasoning to enable autonomous agents to interactively learn to apply common sense and model an individual’s moral norms.
As AI systems become ever more integrated into our lives, it is increasingly important to model moral reasoning accurately. Moral reasoning is rapid and unconscious; analogical reasoning, which can likewise operate unconsciously, is therefore a promising approach to modeling it. This paper explores the use of analogical generalizations to improve moral reasoning. Analogical reasoning has already been used to successfully model moral reasoning in the MoralDM model, but MoralDM exhaustively matches against all known cases, which is computationally intractable and cognitively implausible for human-scale knowledge bases. We investigate the performance of an extension of MoralDM that uses the MAC/FAC model of analogical retrieval, over three conditions, on a set of highly confusable moral scenarios.
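MAC/FAC avoids exhaustive matching by retrieving in two stages: a cheap MAC stage scores every case in memory with a flat content vector (a dot product over predicate counts, ignoring argument structure) and keeps only the top few, and an expensive FAC stage runs structural matching on those survivors alone. A schematic sketch of that two-stage pipeline (the feature-count vectors and the stand-in structural scorer are simplifications, not the actual SME):

```python
from collections import Counter

def content_vector(case):
    """MAC-stage representation: counts of predicates in a case,
    ignoring argument structure entirely."""
    return Counter(fact[0] for fact in case)

def mac_score(probe_vec, case_vec):
    # Dot product of the two flat predicate-count vectors.
    return sum(probe_vec[p] * case_vec[p] for p in probe_vec)

def structural_score(probe, case):
    # Stand-in for the expensive structure-mapping (FAC) stage:
    # here, just count whole shared facts, arguments included.
    return len(set(probe) & set(case))

def mac_fac(probe, memory, k=3):
    probe_vec = content_vector(probe)
    # MAC: cheap filter over the whole case memory.
    survivors = sorted(
        memory,
        key=lambda c: mac_score(probe_vec, content_vector(c)),
        reverse=True,
    )[:k]
    # FAC: expensive structural matching on the survivors only.
    return max(survivors, key=lambda c: structural_score(probe, c))

# Hypothetical cases: each is a tuple of (predicate, arg...) facts.
probe = (("cause", "heat", "expansion"), ("greater", "pressure", "limit"))
memory = [
    (("cause", "heat", "boiling"), ("greater", "temp", "limit")),
    (("cause", "heat", "expansion"), ("part", "valve", "boiler")),
    (("likes", "cat", "milk"),),
]
best = mac_fac(probe, memory)
```

Note the division of labor: the first memory case wins the MAC stage (it shares both predicates with the probe), but the FAC stage prefers the second, which shares an entire fact. Only the survivors of the cheap filter ever incur the expensive match.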
Cognitive simulation of analogical processing can be used to answer comparison questions such as: What are the similarities and/or differences between A and B, for concepts A and B in a knowledge base (KB)? Previous attempts to use a general-purpose analogical reasoner to answer such questions revealed three major problems: (a) the system presented too much information in the answer, and the salient similarity or difference was not highlighted; (b) analogical inference found some incorrect differences; and (c) some expected similarities were not found. The cause of these problems was primarily the lack of a well-curated KB and, secondarily, algorithmic deficiencies. In this paper, relying on a well-curated biology KB, we present a specific implementation of comparison questions inspired by a general model of analogical reasoning. We present numerous examples of answers produced by the system and empirical data on answer quality to illustrate that we have addressed many of the problems of the previous system.
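The core of such a comparison question can be illustrated by aligning the two concepts' representations and partitioning the aligned slots into matches and mismatches. A toy sketch over a flat property-value KB (the biology entries here are illustrative stand-ins, far simpler than the paper's structured KB):

```python
def compare(kb, a, b):
    """Answer 'what are the similarities and differences between a and b?'
    over a flat property-value KB: aligned slots with equal values are
    similarities; aligned slots with unequal values are differences."""
    pa, pb = kb[a], kb[b]
    shared_slots = pa.keys() & pb.keys()
    similarities = {s: pa[s] for s in shared_slots if pa[s] == pb[s]}
    differences = {s: (pa[s], pb[s]) for s in shared_slots if pa[s] != pb[s]}
    return similarities, differences

# Hypothetical toy KB entries, not the paper's curated biology KB.
kb = {
    "mitosis": {"process-type": "cell-division",
                "daughter-cells": 2, "ploidy-change": "none"},
    "meiosis": {"process-type": "cell-division",
                "daughter-cells": 4, "ploidy-change": "halved"},
}
sims, diffs = compare(kb, "mitosis", "meiosis")
```

A well-curated KB matters here precisely because a missing or inconsistently named slot surfaces as a spurious difference or a missed similarity, which is the failure mode the abstract describes.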
John Licato, Naveen Sundar Govindarajulu, Selmer Bringsjord, Michael Pomeranz, and Logan Gittelson (Rensselaer Polytechnic Institute)
Gödel's proof of his famous first incompleteness theorem (G1) has quite understandably long been a tantalizing target for those wanting to engineer impressively intelligent computational systems. After all, in establishing G1, Gödel did something that by any metric must be classified as stunningly intelligent. We observe that it has long been understood that there is some sort of analogical relationship between the Liar Paradox (LP) and G1, and that Gödel himself appreciated and exploited the relationship. Yet the exact nature of the relationship has hitherto not been uncovered, by which we mean that the following question has not been answered: Given a description of LP, and the suspicion that it may somehow be used by a suitably programmed computing machine to find a proof of the incompleteness of Peano Arithmetic, can such a machine, provided this description as input, produce as output a complete and verifiably correct proof of G1? In this paper, we summarize engineering that entails an affirmative answer to this question. Our approach uses what we call analogico-deductive reasoning (ADR), which combines analogical and deductive reasoning to produce a full deductive proof of G1 from LP. Our engineering uses a form of ADR based on our META-R system, and a connection between the Liar Sentence in LP and Gödel's Fixed Point Lemma, from which G1 follows quickly.
Analogy is heavily used in instructional texts. We introduce the concept of analogical dialogue acts (ADAs), which represent the roles utterances play in instructional analogies. We describe a catalog of such acts, based on ideas from structure-mapping theory. We focus on the operations that these acts lead to while understanding instructional texts, using the Structure-Mapping Engine (SME) and dynamic case construction in a computational model. We test this model on a small corpus of instructional analogies expressed in simplified English, which were understood via a semi-automatic natural language system using analogical dialogue acts. By understanding the analogies, the model enabled a system to answer questions that it could not answer without them.
This paper describes a program called Hob that uses analogical mappings across narratives to drive an experimental conversational system. Analogical mappings are used to drive internal reasoning processes and supply response templates. The knowledge base is written in simple English, and consists of three parts: a dictionary, a collection of facts and heuristics, and a collection of stories that notionally correspond to an experience base. Internally, knowledge is stored and used in the form of near-canonical English parse trees. Thus the schema is not hardwired, but is derived from the knowledge base as interpreted under the rules and implications of English grammar. An experimental "sellbot" application is described, and example runs are presented.
We present a computational model, MoralDM, which integrates several AI techniques to model recent psychological findings on moral decision-making. Current theories of moral decision-making extend beyond pure utilitarian models by relying on contextual factors that vary with culture. MoralDM uses a natural language system to produce formal representations from psychological stimuli, to reduce tailorability. The impacts of secular versus sacred values are modeled via qualitative reasoning, using an order-of-magnitude representation. MoralDM uses a combination of first-principles reasoning and analogical reasoning to determine consequences and utilities when making moral judgments. We describe how MoralDM works and show that it can model psychological results and improve its performance by accumulating examples.
A major limitation of today's computer games is the shallowness of interactions with non-player characters. To build up relationships with players, NPCs should be able to remember shared experiences, including conversations, and shape their responses accordingly. We believe that progress in AI has already reached the point where research on using NLP and large KBs in games could lead to important new capabilities. We describe our Listener Architecture for conversational games, which has been implemented in a toolkit used to make short experimental games. Episodic memory plays a central role, using analogical reasoning over a library of previous conversations with the player. Examples and scale-up issues are discussed.