As AI systems become ever more integrated into our lives, it becomes important to model moral reasoning accurately. Human moral reasoning is rapid and often unconscious; analogical reasoning, which can also operate unconsciously, is therefore a promising approach to modeling it. This paper explores the use of analogical generalizations to improve moral reasoning. Analogical reasoning has already been used successfully to model moral reasoning in the MoralDM model, but MoralDM matches exhaustively across all known cases, which is computationally intractable and cognitively implausible for human-scale knowledge bases. We investigate the performance of an extension of MoralDM that uses the MAC/FAC model of analogical retrieval, across three conditions on a set of highly confusable moral scenarios.
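The two-stage retrieval idea behind MAC/FAC can be sketched as follows. This is a minimal illustration, not MoralDM's actual machinery: the fact-tuple case representation, the vocabulary table, and the `structural_match` scorer are all illustrative stand-ins (a real system would use SME-style structure mapping rather than counting shared facts).

```python
def content_vector(case, vocab):
    """Flatten a case into a bag-of-predicates count vector (MAC's cheap summary)."""
    v = [0] * len(vocab)
    for fact in case:
        if fact[0] in vocab:
            v[vocab[fact[0]]] += 1
    return v

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def structural_match(probe, candidate):
    """Crude stand-in for a structure-mapping engine: count shared facts.
    (Real SME aligns relational structure; this is only illustrative.)"""
    return len(set(probe) & set(candidate))

def mac_fac_retrieve(probe, memory, vocab, k=2):
    """MAC stage: cheap dot-product filter over the whole case library.
    FAC stage: expensive structural scoring on the shortlist only."""
    pv = content_vector(probe, vocab)
    shortlist = sorted(memory, key=lambda c: dot(pv, content_vector(c, vocab)),
                       reverse=True)[:k]
    return max(shortlist, key=lambda c: structural_match(probe, c))
```

The point of the split is that only the top-k candidates surviving the cheap MAC filter ever reach the expensive structural matcher, avoiding the exhaustive matching that makes the original approach intractable.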
This paper presents an approach to creating flexible general-logic representations from language for use in high-level reasoning tasks in cognitive modeling. These representations are grounded in a large-scale ontology and emphasize semantic breadth at the cost of syntactic breadth. The interpretation process is task-independent, yet allows task-specific pragmatics to guide interpretation. In the context of a particular cognitive model, we discuss our use of limited abduction for interpretation and present results on its performance.
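One common framing of abductive interpretation is to prefer the candidate reading that requires the fewest assumed facts. A minimal sketch under that framing, with all names illustrative (this is not the paper's actual algorithm):

```python
def abductive_cost(interpretation, kb):
    """The facts this interpretation needs that are not already in the KB,
    i.e. the assumptions it would force the system to abduce."""
    return [fact for fact in interpretation if fact not in kb]

def best_interpretation(candidates, kb):
    """Limited abduction as least-commitment choice: pick the candidate
    interpretation that minimizes the number of abduced facts."""
    return min(candidates, key=lambda c: len(abductive_cost(c, kb)))
```

"Limited" abduction in practice also bounds which predicates may be assumed at all; the cost function above is the simplest stand-in for that kind of constraint.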
Autonomous systems must consider the moral ramifications of their actions. Moral norms vary among people and depend on common sense, posing a challenge for encoding them explicitly in a system. I propose to develop a model of repeated analogical chaining and analogical reasoning to enable autonomous agents to interactively learn to apply common sense and model an individual’s moral norms.
We describe how we are using natural language techniques to develop systems that can automatically encode a range of input materials for cognitive simulations. We start by summarizing this type of problem, and the components we are using. We then describe three projects that are using this common infrastructure: learning from multimodal materials, modeling decision making in moral dilemmas, and modeling conceptual change in development.
This paper describes an analogy ontology, a formal representation of some key ideas in analogical processing, that supports the integration of analogical processing with first-principles reasoners. The ontology is based on Gentner's structure-mapping theory, a psychological account of analogy and similarity. The semantics of the ontology are enforced via procedural attachment, using cognitive simulations of structure-mapping to provide analogical processing services.

Introduction

There is mounting psychological evidence that human cognition centrally involves similarity computations over structured representations, in tasks ranging from high-level visual perception to problem solving, learning, and conceptual change. Understanding how to integrate analogical processing into AI systems seems crucial to creating more humanlike reasoning systems.
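The procedural-attachment idea can be sketched in miniature: a logical predicate whose answers are computed by analogy code rather than looked up in a fact base. Everything here is an illustrative stand-in, assuming toy fact-tuple descriptions; the predicate name and the naive alignment are not the paper's actual ontology or a real structure-mapping engine.

```python
def candidate_inferences(base, target):
    """Crude structure-mapping stand-in: align entities via facts that share
    a relation, then project unmatched base facts through those entity
    correspondences to produce candidate inferences about the target."""
    target_rels = {fact[0]: fact for fact in target}
    mapping = {}
    for fact in base:
        if fact[0] in target_rels:  # matched relational fact
            for b_arg, t_arg in zip(fact[1:], target_rels[fact[0]][1:]):
                mapping[b_arg] = t_arg  # entity correspondence
    return [(fact[0],) + tuple(mapping[a] for a in fact[1:])
            for fact in base
            if fact[0] not in target_rels and all(a in mapping for a in fact[1:])]

# Procedural attachment: the reasoner answers this predicate by running
# the analogy code above instead of consulting its fact base.
ATTACHMENTS = {"candidateInferenceOf": candidate_inferences}

def ask(pred, *args):
    return ATTACHMENTS[pred](*args)
```

On the classic solar-system/atom example, `ask("candidateInferenceOf", solar, atom)` aligns sun with nucleus and planet with electron through the shared `attracts` relation, then projects the unmatched `hotter` fact onto the atom description.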