Forbus, Kenneth D.



Remembering Marvin Minsky

AI Magazine

Marvin Minsky, one of the pioneers of artificial intelligence and a renowned mathematician and computer scientist, died on Sunday, 24 January 2016 of a cerebral hemorrhage. In this article, AI scientists Kenneth D. Forbus (Northwestern University), Benjamin Kuipers (University of Michigan), and Henry Lieberman (Massachusetts Institute of Technology) recall their interactions with Minsky and briefly recount the impact he had on their lives and their research. A remembrance of Marvin Minsky was held at the AAAI Spring Symposium at Stanford University on March 22. Video remembrances of Minsky by Danny Bobrow, Benjamin Kuipers, Ray Kurzweil, Richard Waldinger, and others can be found on the sentient webpage1 or on youtube.com.


Using Analogy to Cluster Hand-Drawn Sketches for Sketch-Based Educational Software

AI Magazine

Useful feedback relies on models of domain-specific knowledge, especially models that are commonly held by potential students. To determine empirically what these models are, student data can be clustered to reveal common misconceptions or common problem-solving strategies. We use this approach to cluster a corpus of hand-drawn student sketches to discover common answers.
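A minimal sketch of the clustering idea described above, assuming each sketch has already been converted to a set of relational facts: score pairwise similarity with an analogical matcher and greedily group sketches whose similarity to a cluster exemplar exceeds a threshold. The similarity function and the toy fault-diagram facts are illustrative stand-ins, not SME or the paper's actual corpus.

```python
# Minimal sketch of analogy-based clustering of student sketches.
# similarity() is a hypothetical stand-in for a structure-mapping score.

def similarity(sketch_a, sketch_b):
    """Placeholder: overlap of relational facts (an assumption for illustration)."""
    a, b = set(sketch_a), set(sketch_b)
    return len(a & b) / max(len(a | b), 1)

def cluster_sketches(sketches, threshold=0.6):
    """Greedily assign each sketch to the first cluster whose exemplar
    it matches above threshold; otherwise start a new cluster."""
    clusters = []  # list of (exemplar, members)
    for sketch in sketches:
        for exemplar, members in clusters:
            if similarity(sketch, exemplar) >= threshold:
                members.append(sketch)
                break
        else:
            clusters.append((sketch, [sketch]))
    return clusters

# Toy example: two common answers surface as two clusters.
sketches = [
    [("touches", "fault", "surface"), ("above", "hangingWall", "footWall")],
    [("touches", "fault", "surface"), ("above", "hangingWall", "footWall")],
    [("below", "hangingWall", "footWall")],
]
print([len(members) for _, members in cluster_sketches(sketches)])
```

In the full system the score would come from structure mapping over complete sketch descriptions; the greedy threshold here just illustrates how common answers emerge as large clusters.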


Learning Qualitative Models by Demonstration

AAAI Conferences

Creating software agents that learn interactively requires the ability to learn from a small number of trials, extracting general, flexible knowledge that can drive behavior from observation and interaction. We claim that qualitative models provide a useful intermediate level of causal representation for dynamic domains, including the formulation of strategies and tactics. We argue that qualitative models are quickly learnable, and enable model-based reasoning techniques to be used to recognize, operationalize, and construct more strategic knowledge. This paper describes an approach to incrementally learning qualitative influences by demonstration in the context of a strategy game. We show how the learned model can help a system play by enabling it to explain which actions could contribute to maximizing a quantitative goal. We also show how reasoning about the model allows it to reformulate a learning problem to address delayed effects and credit assignment, such that it can improve its performance on more strategic tasks such as city placement.
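A minimal sketch, under assumed representations, of how qualitative influences might be hypothesized incrementally from demonstration: when two quantities change together across observed steps, record a positive or negative influence, and retract the hypothesis when later observations contradict it. The class, quantity names, and update rule are illustrative, not the paper's model.

```python
# Minimal sketch of incrementally hypothesizing qualitative influences
# from observed quantity changes during a demonstration. All names and
# the update rule are assumptions for illustration.

def sign(x):
    return (x > 0) - (x < 0)

class InfluenceLearner:
    def __init__(self):
        # (cause, effect) -> +1 (same direction), -1 (opposite), or None (contradicted)
        self.influences = {}

    def observe(self, cause, d_cause, effect, d_effect):
        """Update the hypothesized influence of `cause` on `effect` given
        the observed changes d_cause and d_effect in one demonstration step."""
        if d_cause == 0 or d_effect == 0:
            return
        observed = sign(d_cause) * sign(d_effect)
        prior = self.influences.get((cause, effect), observed)
        self.influences[(cause, effect)] = observed if prior == observed else None

learner = InfluenceLearner()
learner.observe("numCities", +1, "foodProduction", +3)
learner.observe("numCities", +2, "foodProduction", +5)
learner.observe("pollution", +1, "populationGrowth", -2)
print(learner.influences)
# {('numCities', 'foodProduction'): 1, ('pollution', 'populationGrowth'): -1}
```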


Modeling the Evolution of Knowledge in Learning Systems

AAAI Conferences

How do reasoning systems that learn evolve over time? What are the properties of different learning strategies? Characterizing the evolution of these systems is important for understanding their limitations and gaining insights into the interplay between learning and reasoning. We describe an inverse ablation model for studying how large knowledge-based systems evolve: create a small knowledge base by ablating a large KB, then simulate learning by incrementally re-adding facts, using different strategies to model different types of learners. For each iteration, reasoning properties (including the number of questions answered and run time) are collected to explore how learning strategies and reasoning interact. We describe several experiments with the inverse ablation model, examining how two different learning strategies perform. Our results suggest that different concepts show different rates of growth, and that the density and distribution of facts that can be learned are important parameters for modulating the rate of learning.
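A minimal sketch of the inverse ablation loop as described: ablate a large KB down to a small seed, re-add facts in increments chosen by a learning strategy, and record reasoning metrics after each iteration. The toy facts, the random strategy, and answers_question are illustrative assumptions, not the actual KB or query answerer used in the experiments.

```python
# Minimal sketch of the inverse ablation model.

import random
import time

def answers_question(kb, question):
    """Placeholder: a question counts as answered when every fact it
    needs is present in the KB (an assumption for illustration)."""
    return all(fact in kb for fact in question)

def random_strategy(remaining, kb):
    """One simulated learner: re-add facts in random order."""
    return random.sample(remaining, len(remaining))

def inverse_ablation(full_kb, seed_size, strategy, questions, step=100):
    """Ablate full_kb to a small seed, then re-add facts in increments
    chosen by `strategy`, recording reasoning metrics each iteration."""
    kb = set(random.sample(sorted(full_kb), seed_size))
    remaining = [f for f in full_kb if f not in kb]
    history = []
    while remaining:
        batch = strategy(remaining, kb)[:step]
        kb.update(batch)
        batch_set = set(batch)
        remaining = [f for f in remaining if f not in batch_set]
        start = time.time()
        answered = sum(1 for q in questions if answers_question(kb, q))
        history.append((len(kb), answered, time.time() - start))
    return history

# Toy run: 500 facts, 20 questions that each need 3 facts.
facts = {("isa", f"thing{i}", "Object") for i in range(500)}
questions = [random.sample(sorted(facts), 3) for _ in range(20)]
for size, answered, secs in inverse_ablation(facts, 50, random_strategy, questions):
    print(size, answered, round(secs, 4))
```

Swapping in a different strategy function is how different types of learners would be simulated in this sketch.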


Analogical Dialogue Acts: Supporting Learning by Reading Analogies in Instructional Texts

AAAI Conferences

Analogy is heavily used in instructional texts. We introduce the concept of analogical dialogue acts (ADAs), which represent the roles utterances play in instructional analogies. We describe a catalog of such acts, based on ideas from structure-mapping theory. We focus on the operations that these acts lead to while understanding instructional texts, using the Structure-Mapping Engine (SME) and dynamic case construction in a computational model. We test this model on a small corpus of instructional analogies expressed in simplified English, which were understood via a semi-automatic natural language system using analogical dialogue acts. After understanding the analogies, the system was able to answer questions that it could not answer without them.
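A minimal sketch of the dispatching idea: each utterance in an instructional analogy is tagged with a dialogue-act type, and each type triggers an operation such as extending the base case, extending the target case, or noting an explicit correspondence. The act names, contents, and the solar-system example are illustrative, not the paper's actual catalog.

```python
# Minimal sketch of dispatching on analogical dialogue acts (ADAs).
# Act names, contents, and operations are illustrative assumptions.

def process_analogy(tagged_utterances):
    """Accumulate base and target cases from ADA-tagged utterances."""
    base, target, correspondences = set(), set(), []
    for act, content in tagged_utterances:
        if act == "IntroduceComparison":
            continue                          # notes that an analogy is underway
        elif act == "BaseFact":
            base.add(content)                 # extend the base case description
        elif act == "TargetFact":
            target.add(content)               # extend the target case description
        elif act == "Correspondence":
            correspondences.append(content)   # explicit alignment hint
    # In the full model the base and target would be handed to SME here,
    # along with the hints; this sketch just returns the accumulated cases.
    return base, target, correspondences

utterances = [
    ("IntroduceComparison", ("analogy", "solarSystem", "atom")),
    ("BaseFact", ("revolvesAround", "planet", "sun")),
    ("TargetFact", ("revolvesAround", "electron", "nucleus")),
    ("Correspondence", (("sun", "nucleus"), ("planet", "electron"))),
]
print(process_analogy(utterances))
```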


Transfer Learning through Analogy in Games

AI Magazine

We report on a series of transfer learning experiments in game domains, in which we use structural analogy from one learned game to speed learning of another related game. We find that a major benefit of analogy is that it reduces the extent to which the source domain must be generalized before transfer. We describe two techniques in particular, minimal ascension and metamapping, that enable analogies to be drawn even when comparing descriptions using different relational vocabularies. Evidence for the effectiveness of these techniques is provided by a large-scale external evaluation, involving a substantial number of novel distant analogs.
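A minimal sketch of the minimal-ascension idea: two nonidentical predicates may be placed in correspondence when they share a sufficiently close common superordinate in the predicate hierarchy. The toy hierarchy and the depth cutoff below are assumptions for illustration, not the evaluation's actual vocabularies.

```python
# Minimal sketch of minimal ascension: nonidentical predicates can match
# if they have a close common ancestor in the predicate hierarchy.
# The toy hierarchy and max_steps cutoff are illustrative assumptions.

PARENT = {
    "capturesPiece": "attacks",
    "bombardsUnit": "attacks",
    "attacks": "action",
    "movesTo": "action",
    "action": None,
}

def ancestors(pred):
    """Return pred followed by its chain of superordinates."""
    chain = []
    while pred is not None:
        chain.append(pred)
        pred = PARENT.get(pred)
    return chain

def minimal_ascension_match(pred_a, pred_b, max_steps=2):
    """True if pred_a and pred_b share an ancestor reachable from both
    within max_steps links up the hierarchy."""
    up_a, up_b = ancestors(pred_a), ancestors(pred_b)
    return bool(set(up_a[:max_steps + 1]) & set(up_b[:max_steps + 1]))

print(minimal_ascension_match("capturesPiece", "bombardsUnit"))            # True: both under attacks
print(minimal_ascension_match("capturesPiece", "movesTo", max_steps=1))    # False: only share the remote root
```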


Companion Cognitive Systems: A Step toward Human-Level AI

AI Magazine

We are developing Companion Cognitive Systems, a new kind of software that can be effectively treated as a collaborator. Aside from their potential utility, we believe this effort is important because it focuses on three key problems that must be solved to achieve human-level AI: robust reasoning and learning, interactivity, and longevity. We describe the ideas we are using to develop the first architecture for Companions: analogical processing, grounded in cognitive science for reasoning and learning, sketching and concept maps to improve interactivity, and a distributed agent architecture hosted on a cluster to achieve performance and longevity. We outline some results on learning by accumulating examples derived from our first experimental version.
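A minimal sketch, with assumed agent names, of the distributed-agent style mentioned above: specialized agents (for example a session manager, a reasoner, and a sketch agent) run concurrently and communicate by message passing. This is an illustration of the architectural idea only, not the Companions implementation, which distributes its agents across a cluster.

```python
# Minimal sketch of message-passing agents; names are illustrative assumptions.

import queue
import threading

class Agent(threading.Thread):
    """One specialized agent; receives (sender, content) messages on its inbox."""
    def __init__(self, name, inbox, router):
        super().__init__(daemon=True)
        self.name, self.inbox, self.router = name, inbox, router

    def run(self):
        while True:
            sender, content = self.inbox.get()
            if content == "shutdown":
                break
            print(f"{self.name} handling {content!r} from {sender}")

    def send(self, recipient, content):
        self.router[recipient].put((self.name, content))

router = {n: queue.Queue() for n in ("session-manager", "reasoner", "sketch-agent")}
agents = {n: Agent(n, router[n], router) for n in router}
for agent in agents.values():
    agent.start()

agents["session-manager"].send("reasoner", ("solve", "analogy-query-1"))
agents["session-manager"].send("reasoner", "shutdown")
agents["reasoner"].join(timeout=1)
```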


VModel: A Visual Qualitative Modeling Environment for Middle-school Students

AI Magazine

Learning how to create, test, and revise models is a central skill in scientific reasoning. We argue that qualitative modeling provides an appropriate level of representation for helping middle-school students learn to become modelers. We describe VModel, a system we have created that uses visual representations and that enables middle-school students to create qualitative models. We discuss the design of the visual representation language, how VModel works, and evidence from school studies indicating that it is successful in helping students.
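A minimal sketch, in code rather than VModel's visual notation, of the kind of qualitative model a student might assemble: quantities connected by signed influence links, with one-step propagation of a qualitative change. The plant-growth example and all names are illustrative assumptions, not VModel's actual representation.

```python
# Minimal sketch of a student-style qualitative model: quantities linked
# by signed influences. Names and the example are illustrative.

from dataclasses import dataclass, field

@dataclass
class QualitativeModel:
    quantities: set = field(default_factory=set)
    influences: list = field(default_factory=list)   # (src, dst, sign)

    def add_influence(self, src, dst, sign):
        self.quantities.update([src, dst])
        self.influences.append((src, dst, sign))

    def predict(self, quantity, change):
        """Propagate a qualitative change (+1 or -1) one step along influence links."""
        return {dst: change * sign
                for src, dst, sign in self.influences if src == quantity}

model = QualitativeModel()
model.add_influence("sunlight", "photosynthesis", +1)
model.add_influence("photosynthesis", "plantGrowth", +1)
model.add_influence("shade", "sunlight", -1)

print(model.predict("sunlight", +1))   # {'photosynthesis': 1}
```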