Forbus, Kenneth D.
Qualitative Event Perception: Leveraging Spatiotemporal Episodic Memory for Learning Combat in a Strategy Game
Hancock, Will, Forbus, Kenneth D.
Event perception refers to people's ability to carve up continuous experience into meaningful discrete events. We speak of finishing our morning coffee, mowing the lawn, leaving work, etc. as singular occurrences that are localized in time and space. In this work, we analyze how spatiotemporal representations can be used to automatically segment continuous experience into structured episodes, and how these descriptions can be used for analogical learning. These representations are based on Hayes' notion of histories and build upon existing work on qualitative episodic memory. Our agent automatically generates event descriptions of military battles in a strategy game and improves its gameplay by learning from this experience. Episodes are segmented based on changing properties in the world, and we show evidence that they facilitate learning because they capture event descriptions at a useful spatiotemporal grain size. This is evaluated through our agent's performance in the game. We also show empirical evidence that the perceived spatial extent of episodes affects both their temporal duration and the overall number of cases generated.
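As a rough sketch of the segmentation idea described above (not the paper's implementation), the Python fragment below closes an episode whenever a tracked qualitative property of the world changes value; the state fields and property names are invented for illustration.

    # Illustrative sketch only: segment a stream of (time, state) samples into
    # episodes whenever any tracked qualitative property changes value.
    # The property names ("region", "in_combat") are hypothetical.

    def qualitative_props(state):
        """Project a raw state onto the qualitative properties being tracked."""
        return (state["region"], state["in_combat"])

    def segment_episodes(samples):
        """samples: list of (time, state) pairs in temporal order."""
        episodes, current, prev_props = [], [], None
        for time, state in samples:
            props = qualitative_props(state)
            if prev_props is not None and props != prev_props:
                episodes.append(current)   # a property change closes the episode
                current = []
            current.append((time, state))
            prev_props = props
        if current:
            episodes.append(current)
        return episodes

    samples = [
        (0, {"region": "north", "in_combat": False}),
        (1, {"region": "north", "in_combat": True}),   # combat starts: new episode
        (2, {"region": "north", "in_combat": True}),
        (3, {"region": "south", "in_combat": False}),  # properties change again
    ]
    print([len(ep) for ep in segment_episodes(samples)])  # [1, 2, 1]

Each resulting episode could then be stored as a case for analogical retrieval, which is roughly the role episodes play in the learning described above.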
Knowledge Management in the Companion Cognitive Architecture
Nakos, Constantine, Forbus, Kenneth D.
One of the fundamental aspects of cognitive architectures is their ability to encode and manipulate knowledge. Without a consistent, well-designed, and scalable knowledge management scheme, an architecture will be unable to move past toy problems and tackle the broader problems of cognition. In this paper, we document some of the challenges we have faced in developing the knowledge stack for the Companion cognitive architecture and discuss the tools, representations, and practices we have developed to overcome them. We also lay out a series of potential next steps that will allow Companion agents to play a greater role in managing their own knowledge. It is our hope that these observations will prove useful to other cognitive architecture developers facing similar challenges.
A Defeasible Deontic Calculus for Resolving Norm Conflicts
Olson, Taylor, Salas-Damian, Roberto, Forbus, Kenneth D.
When deciding how to act, we must consider other agents' norms and values. However, our norms are ever-evolving. We often add exceptions or change our minds, and thus norms can conflict over time. Therefore, to maintain an accurate mental model of others' norms, and thus to avoid social friction, such conflicts must be detected and resolved quickly. Formalizing this process has been the focus of various deontic logics and normative multi-agent systems. We aim to bridge the gap between these two fields here. We contribute a defeasible deontic calculus with inheritance and prove that it resolves norm conflicts. Through this analysis, we also reveal a common resolution strategy as a red herring. This paper thus contributes a theoretically justified axiomatization of norm conflict detection and resolution.
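As a generic illustration of what norm conflict detection involves (and emphatically not the defeasible calculus contributed in the paper), the sketch below flags a direct conflict whenever one norm obliges exactly what another forbids for the same agent; the (modality, agent, action) encoding is hypothetical.

    # Generic illustration of direct norm conflict detection, not the paper's
    # defeasible deontic calculus. A norm is (modality, agent, action), where
    # modality is "obligatory" or "forbidden"; the encoding is invented here.

    def direct_conflicts(norms):
        conflicts = []
        for i, (m1, agent1, act1) in enumerate(norms):
            for m2, agent2, act2 in norms[i + 1:]:
                same_scope = (agent1, act1) == (agent2, act2)
                if same_scope and {m1, m2} == {"obligatory", "forbidden"}:
                    conflicts.append(((m1, agent1, act1), (m2, agent2, act2)))
        return conflicts

    norms = [
        ("obligatory", "driver", "stop_at_red_light"),
        ("forbidden", "driver", "stop_at_red_light"),   # conflicts with the norm above
        ("forbidden", "driver", "use_phone_while_driving"),
    ]
    print(direct_conflicts(norms))

Resolving such conflicts, e.g., deciding which norm defeats the other, is what the paper's calculus addresses formally.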
Hybrid Primal Sketch: Combining Analogy, Qualitative Representations, and Computer Vision for Scene Understanding
Forbus, Kenneth D., Chen, Kezhen, Xu, Wangcheng, Usher, Madeline
One of the purposes of perception is to bridge between sensors and conceptual understanding. Marr's Primal Sketch combined initial edge-finding with multiple downstream processes to capture aspects of visual perception such as grouping and stereopsis. Given the progress made in multiple areas of AI since then, we have developed a new framework inspired by Marr's work, the Hybrid Primal Sketch, which combines computer vision components into an ensemble to produce sketch-like entities. These entities are then further processed by CogSketch, our model of high-level human vision, to produce both more detailed shape representations and scene representations that can be used for data-efficient learning via analogical generalization. This paper describes our theoretical framework, summarizes several previous experiments, and outlines a new experiment in progress on diagram understanding.
Sketch Worksheets in STEM Classrooms: Two Deployments
Forbus, Kenneth D. (Northwestern University) | Garnier, Bridget (University of Wisconsin-Madison) | Tikoff, Basil (University of Wisconsin-Madison) | Marko, Wayne (Northwestern University) | Usher, Madeline (Northwestern University) | McLure, Matthew (Northwestern University)
Sketching can be a valuable tool for science education, but it is currently underutilized. Sketch worksheets were developed to help change this, by using AI technology to give students immediate feedback and to give instructors assistance in grading. Sketch worksheets use visual representations automatically computed by CogSketch, which are combined with conceptual information from the OpenCyc ontology. Feedback is provided to students by comparing an instructor's sketch to a student's sketch, using the Structure-Mapping Engine. This paper describes our experiences in deploying sketch worksheets in two types of classes: geoscience and AI. Sketch worksheets for introductory geoscience classes were developed by geoscientists at the University of Wisconsin-Madison, authored using CogSketch, and used in classes at both Wisconsin and Northwestern University. Sketch worksheets were also developed and deployed for a knowledge representation and reasoning course at Northwestern. Our experience indicates that sketch worksheets can provide helpful on-the-spot feedback to students and significantly improve grading efficiency, to the point where sketching assignments become more practical to use broadly in STEM education.
Analogy and Relational Representations in the Companion Cognitive Architecture
Forbus, Kenneth D. (Northwestern University) | Hinrichs, Thomas (Northwestern University)
This includes the physical world, where qualitative representations have a long track record of providing human-level reasoning and performance (Forbus 2014), but also social reasoning (for example, degrees of blame [Tomai and Forbus 2007]). Qualitative representations carve up continuous phenomena into symbolic descriptions that serve as a bridge between perception and cognition, facilitate everyday reasoning and communication, and help ground expert reasoning. The focus of the Companion cognitive architecture (Forbus, Klenk, and Hinrichs 2009) is on higher-order cognition in Newell's (1990) timescale decomposition of cognitive phenomena; thus we approximate subsystems whose operations occur at shorter timescales. Structure-mapping theory proposed that analogy involves the construction of mappings between two structured, relational representations, producing correspondences, candidate inferences (statements projected from one description to the other, based on the correspondences), and a score indicating the overall quality of the match. The target is the description one is trying to reason about, and hence inferences are made from base to target by default. We close with some lessons learned and open problems.
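To make the vocabulary above concrete, the toy matcher below (a drastic simplification, not the Structure-Mapping Engine) computes correspondences, candidate inferences, and a match score over facts written as (predicate, arg1, arg2) triples, using the familiar solar-system/atom example.

    # Toy illustration of correspondences, candidate inferences, and score;
    # a greedy matcher over relational facts, not the Structure-Mapping Engine.

    def toy_analogy(base, target):
        correspondences = {}
        score = 0
        for pred, b1, b2 in base:
            for tpred, t1, t2 in target:
                if pred == tpred and correspondences.get(b1, t1) == t1 \
                                 and correspondences.get(b2, t2) == t2:
                    correspondences[b1], correspondences[b2] = t1, t2
                    score += 1
                    break
        # Candidate inferences: base facts whose arguments all map into the
        # target but which have no counterpart there yet.
        inferences = [(p, correspondences[a], correspondences[b])
                      for p, a, b in base
                      if a in correspondences and b in correspondences
                      and (p, correspondences[a], correspondences[b]) not in target]
        return correspondences, inferences, score

    base = [("greater", "mass-sun", "mass-planet"),
            ("attracts", "sun", "planet"),
            ("revolves-around", "planet", "sun")]
    target = [("greater", "charge-nucleus", "charge-electron"),
              ("attracts", "nucleus", "electron")]
    # Candidate inference: ("revolves-around", "electron", "nucleus")
    print(toy_analogy(base, target))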
Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision
Liang, Chen, Berant, Jonathan, Le, Quoc, Forbus, Kenneth D., Lao, Ni
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult when it requires executing efficient discrete operations against a large knowledge base. In this work, we introduce the Neural Symbolic Machine (NSM), which contains (a) a neural "programmer", i.e., a sequence-to-sequence model that maps language utterances to programs and utilizes a key-variable memory to handle compositionality, and (b) a symbolic "computer", i.e., a Lisp interpreter that performs program execution and helps find good programs by pruning the search space. We apply REINFORCE to directly optimize the task reward of this structured prediction problem. To train with weak supervision and improve the stability of REINFORCE, we augment it with an iterative maximum-likelihood training process. NSM outperforms the state of the art on the WebQuestionsSP dataset when trained from question-answer pairs only, without requiring any feature engineering or domain-specific knowledge.
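The sketch below illustrates the programmer/computer split at toy scale; it is not the NSM implementation, and the tiny knowledge base, the "hop" program form, and the reward function are all invented for the example. A hypothetical programmer proposes Lisp-like programs, the symbolic computer executes them and prunes failures, and the resulting answer-match reward is the signal REINFORCE would optimize.

    # Minimal sketch of the programmer/computer loop, not the actual NSM code.
    # KB, program syntax, and reward are hypothetical.

    KB = {("CityIn", "Chicago"): {"Illinois"}, ("CityIn", "Houston"): {"Texas"}}

    def execute(program):
        """Interpret a tiny Lisp-like program, e.g. ("hop", "Chicago", "CityIn")."""
        op, entity, relation = program
        if op != "hop" or (relation, entity) not in KB:
            raise ValueError("execution failure")    # pruned by the "computer"
        return KB[(relation, entity)]

    def reward(program, gold_answer):
        try:
            return 1.0 if execute(program) == gold_answer else 0.0
        except ValueError:
            return 0.0

    candidates = [("hop", "Chicago", "CityIn"),      # proposed by the "programmer"
                  ("hop", "Chicago", "BornIn"),      # fails to execute: pruned
                  ("hop", "Houston", "CityIn")]      # executes, but wrong answer
    print([reward(p, {"Illinois"}) for p in candidates])   # [1.0, 0.0, 0.0]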
Analogical Chaining with Natural Language Instruction for Commonsense Reasoning
Blass, Joseph A. (Northwestern University) | Forbus, Kenneth D. (Northwestern University)
Understanding commonsense reasoning is one of the core challenges of AI. We are exploring an approach inspired by cognitive science, called analogical chaining, to create cognitive systems that can perform commonsense reasoning. Just as rules are chained in deductive systems, multiple analogies build upon each other’s inferences in analogical chaining. The cases used in analogical chaining – called common sense units – are small, to provide inferential focus and broader transfer. Importantly, such common sense units can be learned via natural language instruction, thereby increasing the ease of extending such systems. This paper describes analogical chaining, natural language instruction via microstories, and some subtleties that arise in controlling reasoning. The utility of this technique is demonstrated by performance of an implemented system on problems from the Choice of Plausible Alternatives test of commonsense causal reasoning.
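The loop below is a rough sketch of chaining (not the implemented system): small cases stand in for common sense units, and each case's inferences are added to working memory so later cases can build on them. Analogical retrieval and matching are simplified here to literal subset tests, and the example facts are invented.

    # Rough sketch of the chaining loop only; case encoding and matching are
    # simplified stand-ins for analogical retrieval over common sense units.

    cases = [
        {"if": {("thirsty", "person")}, "then": {("drinks", "person")}},
        {"if": {("drinks", "person")}, "then": {("not-thirsty", "person")}},
    ]

    def chain(facts, cases, max_steps=5):
        working_memory = set(facts)
        for _ in range(max_steps):
            new = set()
            for case in cases:
                if case["if"] <= working_memory:            # case matches current state
                    new |= case["then"] - working_memory    # project its inferences
            if not new:
                break
            working_memory |= new
        return working_memory

    print(chain({("thirsty", "person")}, cases))

In the implemented system the matching step is analogical, so a common sense unit about one situation can project inferences into a structurally similar but differently described one.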
Remembering Marvin Minsky
Forbus, Kenneth D. (Northwestern University) | Kuipers, Benjamin (University of Michigan) | Lieberman, Henry (Massachusetts Institute of Technology)
Marvin Minsky, one of the pioneers of artificial intelligence and a renowned mathematician and computer scientist, died on Sunday, 24 January 2016 of a cerebral hemorrhage. In this article, AI scientists Kenneth D. Forbus (Northwestern University), Benjamin Kuipers (University of Michigan), and Henry Lieberman (Massachusetts Institute of Technology) recall their interactions with Minsky and briefly recount the impact he had on their lives and their research. A remembrance of Marvin Minsky was held at the AAAI Spring Symposium at Stanford University on March 22. Video remembrances of Minsky by Danny Bobrow, Benjamin Kuipers, Ray Kurzweil, Richard Waldinger, and others can be found on the sentient webpage or on youtube.com.