Interactive Learning and Analogical Chaining for Moral and Commonsense Reasoning

AAAI Conferences

Autonomous systems must consider the moral ramifications of their actions. Moral norms vary among people and depend on common sense, which makes them difficult to encode explicitly in a system. I propose to develop a model of repeated analogical chaining and analogical reasoning that enables autonomous agents to interactively learn to apply common sense and to model an individual's moral norms.
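
As a rough illustration of the kind of mechanism this proposal describes, the sketch below retrieves the most similar stored case for a new situation, transfers its moral judgment, and folds user feedback back into memory so later analogies can chain off the corrected case. The example cases, the feature-overlap similarity measure, and the feedback loop are illustrative assumptions, not the proposed model.

```python
# Hypothetical sketch of interactive analogical chaining for norm learning.
# The cases, similarity metric, and feedback loop are illustrative assumptions.

def similarity(case_a, case_b):
    """Crude structural similarity: proportion of shared features."""
    shared = case_a["features"] & case_b["features"]
    union = case_a["features"] | case_b["features"]
    return len(shared) / max(len(union), 1)

def analogical_chain(new_case, memory):
    """Retrieve the most similar precedent and transfer its moral judgment."""
    best = max(memory, key=lambda c: similarity(new_case, c))
    return best, best["judgment"]

def interactive_learning(stream, memory):
    """Chain analogies over a stream of cases, correcting from user feedback."""
    for case in stream:
        precedent, predicted = analogical_chain(case, memory)
        feedback = case["true_judgment"]   # stands in for asking the user
        case["judgment"] = feedback        # store the corrected norm
        memory.append(case)                # the new case becomes a precedent
        yield case["name"], precedent["name"], predicted, feedback

memory = [
    {"name": "take-umbrella", "features": {"borrow", "return-later"}, "judgment": "permissible"},
    {"name": "read-diary",    "features": {"private", "no-consent"},  "judgment": "impermissible"},
]
stream = [
    {"name": "borrow-bike", "features": {"borrow", "no-consent"}, "true_judgment": "impermissible"},
]
for result in interactive_learning(stream, memory):
    print(result)
```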


Preface

AAAI Conferences

This symposium intends to meet the growing desire to integrate research into spatial representation and reasoning by the artificial intelligence, cognitive science, and cognitive psychology communities. Assuming that, to some degree, the nature of human spatial cognition is relevant to AI research, the aims of this symposium are to: initiate an interdisciplinary dialogue to facilitate the exchange of ideas and cross-fertilization among researchers; review the current influence that research into spatial cognition has on approaches to spatial representation in AI; develop a better appreciation of research into spatial representation by identifying issues that span domain and discipline boundaries; and stimulate discussion of issues in the computational realization of cognitive models of spatial representation. Historically, machine vision research has been better aligned with cognitive theories than other fields of AI, probably due to its "all-encompassing" nature and historical links with psychology (many topics in machine vision, such as depth perception, object and shape categorization, and recognition, have direct counterparts in cognitive psychology). Yet for fields such as robot motion planning, physical and commonsense reasoning, design, and machine translation, we find that spatial representation is only one issue among many. It is this characterization of topics by task description that has in part led to the fragmentation of research into spatial representation. The need to step back and reassess the contribution that can be made by cognitive science and psychology has come from a number of directions.


Strategy Variations in Analogical Problem Solving

AAAI Conferences

While it is commonly agreed that analogy is useful in human problem solving, exactly how analogy can and should be used remains an intriguing problem. VanLehn (1998), for instance, argues that there are differences in how novices and experts use analogy, but the VanLehn and Jones (1993) Cascade model does not implement these differences. This paper analyzes several variations in strategies for using analogy to explore possible sources of novice/expert differences. We describe a series of ablation experiments on an expert model to examine the effects of strategy variations in using analogy in problem solving. We provide evidence that failing to use qualitative reasoning when encoding problems, being careless in validating analogical inferences, and not using multiple retrievals can degrade the efficiency of problem solving.
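
To make the ablation idea concrete, the hedged sketch below toggles the three strategy variations named above and compares a toy problem-solving cost against the full expert configuration. The cost model and its multipliers are invented for illustration and do not reproduce the authors' expert model or the Cascade model.

```python
# Hypothetical ablation harness for analogy-use strategies. The strategy flags
# mirror the three variations named in the abstract; the toy cost model is an
# illustrative assumption, not the authors' expert model or the Cascade model.

def solve_cost(qualitative_encoding, validate_inferences, multiple_retrievals):
    """Toy estimate of problem-solving effort under one strategy configuration."""
    cost = 10.0
    if not qualitative_encoding:
        cost *= 1.8   # non-qualitative encodings retrieve poorer analogues
    if not validate_inferences:
        cost *= 1.5   # unvalidated analogical inferences force backtracking
    if not multiple_retrievals:
        cost *= 1.3   # a single retrieval can miss a better precedent
    return cost

EXPERT = {"qualitative_encoding": True,
          "validate_inferences": True,
          "multiple_retrievals": True}

# Ablate one strategy at a time and compare against the full expert configuration.
baseline = solve_cost(**EXPERT)
for flag in EXPERT:
    ablated = dict(EXPERT, **{flag: False})
    print(f"without {flag}: cost {solve_cost(**ablated):.1f} (expert baseline {baseline:.1f})")
```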



An Analogy Ontology for Integrating Analogical Processing and First-principles Reasoning

AAAI Conferences

This paper describes an analogy ontology, a formal representation of some key ideas in analogical processing, that supports the integration of analogical processing with first-principles reasoners. The ontology is based on Gentner's structure-mapping theory, a psychological account of analogy and similarity. The semantics of the ontology are enforced via procedural attachment, using cognitive simulations of structure-mapping to provide analogical processing services. There is mounting psychological evidence that human cognition centrally involves similarity computations over structured representations, in tasks ranging from high-level visual perception to problem solving, learning, and conceptual change [21]. Understanding how to integrate analogical processing into AI systems seems crucial to creating more humanlike reasoning systems [12].
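
As a loose illustration of procedural attachment, the sketch below registers a procedure for a hypothetical analogousTo relation; when a reasoner queries that relation, the attached procedure runs a toy structure-mapping routine instead of consulting stored facts. Neither the predicate name nor the matcher reflects the paper's actual ontology or the SME implementation.

```python
# Hypothetical illustration of procedural attachment: queries about an
# analogy-ontology relation are answered by calling an attached structure-mapping
# routine rather than by looking up stored facts. The matcher is a toy stand-in.

def structure_map(base, target):
    """Toy structure mapping: align relation tuples that share a predicate symbol."""
    correspondences = [(b, t) for b in base for t in target if b[0] == t[0]]
    return correspondences, len(correspondences)

ATTACHMENTS = {}

def attach(predicate):
    """Register a procedure that answers queries about this predicate."""
    def register(fn):
        ATTACHMENTS[predicate] = fn
        return fn
    return register

@attach("analogousTo")
def analogous_to(base, target, threshold=1):
    _, score = structure_map(base, target)
    return score >= threshold

def query(predicate, *args):
    """A first-principles reasoner would fall back to stored facts here."""
    return ATTACHMENTS[predicate](*args)

solar_system = [("attracts", "sun", "planet"), ("revolvesAround", "planet", "sun")]
atom = [("attracts", "nucleus", "electron"), ("revolvesAround", "electron", "nucleus")]
print(query("analogousTo", solar_system, atom))   # True: shared relational structure
```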