Autonomous systems must consider the moral ramifications of their actions. Moral norms vary among people and depend on common sense, which makes encoding them explicitly in a system challenging. I propose to develop a model of repeated analogical chaining and analogical reasoning that enables autonomous agents to interactively learn to apply common sense and to model an individual's moral norms.
We believe that the flexibility and robustness of common sense reasoning come from analogical reasoning, learning, and generalization operating over massive amounts of experience. Million-fact knowledge bases are a good starting point, but are likely to be orders of magnitude smaller, in terms of ground facts, than will be needed to achieve human-like common sense reasoning. This paper describes the FIRE reasoning engine, which we have built to experiment with this approach. We discuss its knowledge base organization, including coarse-coding via mentions and a persistent TMS, which achieve efficient retrieval while respecting the logical environment formed by contexts and their relationships in the KB. We describe its stratified reasoning organization, which supports both reflexive reasoning (Ask, Query) and deliberative reasoning (Solve, HTN planner). Analogical reasoning, learning, and generalization are supported as part of reflexive reasoning. To show the utility of these ideas, we describe how they are used in the Companion cognitive architecture, which has been used in a variety of reasoning and learning experiments.
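The distinction between reflexive and deliberative reasoning can be illustrated with a toy sketch. This is a hypothetical simplification, not FIRE's actual API: `ask` succeeds only on stored ground facts, while `query` is allowed bounded backchaining through simple antecedent/consequent rules (all names and representations here are illustrative assumptions).

```python
class MiniKB:
    """Toy illustration of a stratified reasoning interface (not FIRE's API):
    ask() is cheap fact lookup; query() adds bounded rule-based inference."""

    def __init__(self, facts, rules):
        self.facts = set(facts)    # ground facts, e.g. ("isa", "fido", "Dog")
        self.rules = rules         # (antecedent_fact, consequent_fact) pairs

    def ask(self, fact):
        """Reflexive: succeed only on facts already stored."""
        return fact in self.facts

    def query(self, fact, depth=3):
        """More deliberative: allow depth-bounded backchaining through rules."""
        if self.ask(fact):
            return True
        if depth == 0:
            return False
        return any(self.query(ante, depth - 1)
                   for ante, cons in self.rules if cons == fact)
```

The point of the stratification is cost control: cheap lookups serve routine reasoning, and more expensive inference is invoked only when explicitly requested.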
As AI systems become ever more integrated into our lives, it becomes important to model moral reasoning accurately. Human moral reasoning is rapid and largely unconscious; analogical reasoning, which can also operate unconsciously, is therefore a promising approach to modeling it. This paper explores the use of analogical generalizations to improve moral reasoning. Analogical reasoning has already been used to successfully model moral reasoning in the MoralDM model, but MoralDM exhaustively matches across all known cases, which is computationally intractable and cognitively implausible for human-scale knowledge bases. We investigate the performance of an extension of MoralDM that uses the MAC/FAC model of analogical retrieval, under three conditions, across a set of highly confusable moral scenarios.
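Why MAC/FAC avoids exhaustive matching can be sketched in a few lines. This is a minimal, assumption-laden illustration: cases are represented as lists of predicate tuples, the MAC stage scores every case with a cheap content-vector dot product, and the FAC stage runs a stand-in for structural matching (real MAC/FAC uses SME to align relational structure) only on the MAC survivors.

```python
from collections import Counter

def content_vector(case):
    """MAC-stage representation: counts of predicates appearing in the case."""
    return Counter(fact[0] for fact in case)

def dot(v1, v2):
    """Cheap similarity: dot product over shared predicate counts."""
    return sum(v1[k] * v2[k] for k in v1.keys() & v2.keys())

def structural_overlap(a, b):
    """Stand-in for SME: count identical facts (SME really aligns structure)."""
    return len(set(map(tuple, a)) & set(map(tuple, b)))

def mac_fac(probe, memory, k=3):
    """Two-stage retrieval: fast MAC filter, expensive FAC match on survivors."""
    pv = content_vector(probe)
    # MAC: score everything with the cheap dot product, keep the top k.
    survivors = sorted(memory,
                       key=lambda c: dot(pv, content_vector(c)),
                       reverse=True)[:k]
    # FAC: run the costly matcher only on those k candidates.
    return max(survivors, key=lambda c: structural_overlap(probe, c))
```

The expensive matcher thus runs on a constant number of candidates rather than on the whole case library, which is what makes retrieval tractable at human-scale memory sizes.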
Reasoning about how objects move and interact in space is pervasive in everyday life. It is rightly considered an essential component of intelligence. Consequently, understanding spatial reasoning and developing computational models for it have been central concerns in many fields, including cognitive psychology, mathematics, robotics, vision, and artificial intelligence. Although much progress has been made over the last.
Understanding commonsense reasoning is one of the core challenges of AI. We are exploring an approach inspired by cognitive science, called analogical chaining, to create cognitive systems that can perform commonsense reasoning. Just as rules are chained in deductive systems, multiple analogies build upon each other’s inferences in analogical chaining. The cases used in analogical chaining – called common sense units – are small, to provide inferential focus and broader transfer. Importantly, such common sense units can be learned via natural language instruction, thereby increasing the ease of extending such systems. This paper describes analogical chaining, natural language instruction via microstories, and some subtleties that arise in controlling reasoning. The utility of this technique is demonstrated by performance of an implemented system on problems from the Choice of Plausible Alternatives test of commonsense causal reasoning.
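The chaining idea above can be sketched in miniature. In this hypothetical simplification, a common sense unit is just an antecedent set of facts plus one consequent, and a literal subset test stands in for analogical mapping; the point is that each projected inference extends the situation, enabling further units to fire, exactly as rule chaining does in deductive systems.

```python
def apply_unit(situation, unit):
    """Project a unit's consequent if its antecedent facts are present.
    (A crude stand-in for analogical mapping and candidate inference.)"""
    antecedent, consequent = unit
    return {consequent} if antecedent <= situation else set()

def analogical_chain(situation, units, max_steps=5):
    """Chain analogies: inferences from one match enable later matches."""
    facts = set(situation)
    for _ in range(max_steps):
        new = set()
        for unit in units:
            new |= apply_unit(facts, unit) - facts
        if not new:          # quiescence: no unit yields anything further
            break
        facts |= new
    return facts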