Collaborating Authors

 Forbus, Kenneth D.


Remembering Marvin Minsky

AI Magazine

Marvin Minsky, one of the pioneers of artificial intelligence and a renowned mathematician and computer scientist, died of a cerebral hemorrhage on Sunday, 24 January 2016. He was 88. In this article, AI scientists Kenneth D. Forbus (Northwestern University), Benjamin Kuipers (University of Michigan), and Henry Lieberman (Massachusetts Institute of Technology) recall their interactions with Minsky and briefly recount the impact he had on their lives and their research. A remembrance of Marvin Minsky was held at the AAAI Spring Symposium at Stanford University on March 22. Video remembrances of Minsky by Danny Bobrow, Benjamin Kuipers, Ray Kurzweil, Richard Waldinger, and others can be found on the sentient webpage1 or on youtube.com.


Software Social Organisms: Implications for Measuring AI Progress

AI Magazine

In this article I argue that achieving human-level AI is equivalent to learning how to create sufficiently smart software social organisms. This implies that no single test will be sufficient to measure progress. Instead, evaluations should be organized around demonstrating increasing abilities to participate in our culture, as apprentices. This provides multiple dimensions within which progress can be measured, including how well different interaction modalities can be used, what range of domains can be tackled, and what human-normed levels of knowledge can be acquired, among others. I begin by motivating the idea of software social organisms, drawing on ideas from other areas of cognitive science, and then analyze the substrate capabilities that social organisms need, in terms closer to what is required for computational modeling. Finally, the implications for evaluation are discussed.


Analogical Abduction and Prediction: Their Impact on Deception

AAAI Conferences

Deception involves corrupting the predictions or explanations of others. A deeper understanding of how it works thus requires modeling how human abduction and prediction operate. This paper proposes that most human abduction and prediction are carried out via analogy, over experience and generalizations constructed from experience, where experience includes cultural products such as stories. How analogical reasoning and learning can be used to make predictions and explanations is outlined, along with the advantages of this approach and the technical questions it raises. Concrete examples involving deception and counter-deception are used to explore these ideas further.
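
As a concrete illustration of prediction by analogy, here is a minimal Python sketch under toy assumptions: cases are sets of relational facts (a predicate name plus entity names, all hypothetical), an entity correspondence is brute-forced between a remembered case and a new situation, and unmatched base facts are projected as candidate inferences. Real structure-mapping engines such as SME are far more sophisticated and efficient.

    # A minimal sketch of prediction by analogy. Cases are sets of facts,
    # each a tuple of a predicate name followed by entity names; all
    # predicates and entities below are hypothetical toy encodings.
    from itertools import permutations

    def score_mapping(base, target, mapping):
        """Count base facts that align with target facts under the mapping."""
        mapped = {(p, *[mapping.get(a) for a in args]) for (p, *args) in base}
        return len(mapped & target)

    def best_mapping(base, target):
        """Brute-force the entity correspondence aligning the most facts.
        Assumes the base has at least as many entities as the target;
        feasible only for tiny cases."""
        base_ents = sorted({a for (_, *args) in base for a in args})
        targ_ents = sorted({a for (_, *args) in target for a in args})
        best, best_score = {}, -1
        for chosen in permutations(base_ents, len(targ_ents)):
            m = dict(zip(chosen, targ_ents))
            s = score_mapping(base, target, m)
            if s > best_score:
                best, best_score = m, s
        return best

    def candidate_inferences(base, target, mapping):
        """Project unmatched base facts whose arguments are all mapped:
        these are the analogical predictions or explanations."""
        return {(p, *[mapping[a] for a in args])
                for (p, *args) in base
                if all(a in mapping for a in args)
                and (p, *[mapping[a] for a in args]) not in target}

    # Remembered experience: a feint in boxing.
    base = {("shows", "boxer", "jab"), ("expects", "opponent", "jab"),
            ("actually-throws", "boxer", "hook"), ("surprised", "opponent")}
    # New situation: an army displays a decoy attack; predict what follows.
    target = {("shows", "army", "decoy"), ("expects", "enemy", "decoy")}

    m = best_mapping(base, target)
    print(m)                                      # boxer->army, jab->decoy, ...
    print(candidate_inferences(base, target, m))  # {('surprised', 'enemy')}

The projected fact ("surprised", "enemy") is the analogical prediction: the deception will work on the enemy as it did on the boxing opponent.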


Extending Analogical Generalization with Near-Misses

AAAI Conferences

Concept learning is a central problem for cognitive systems. Generalization techniques can help organize examples by their commonalities, but comparisons with non-examples (near-misses) provide discrimination. Early work on near-misses required examples hand-selected by a teacher who understood the learner’s internal representations. This paper introduces Analogical Learning by Integrating Generalization and Near-misses (ALIGN) and describes three key advances. First, domain-general cognitive models of analogical processes are used to handle a wider range of examples. Second, ALIGN’s analogical generalization process constructs multiple probabilistic representations per concept via clustering, and hence can learn disjunctive concepts. Finally, ALIGN uses unsupervised analogical retrieval to find its own near-miss examples. We show that ALIGN outperforms analogical generalization on two perceptual data sets: (1) hand-drawn sketches; and (2) geospatial concepts from strategy-game maps.
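
The following is a minimal Python sketch of the generalization-plus-near-miss idea, under toy assumptions: examples are sets of hashable facts, similarity is probability-weighted overlap, and the assimilation threshold and near-miss rule are illustrative stand-ins for ALIGN's actual SAGE-based machinery.

    # A minimal sketch of analogical generalization with near-misses.
    # Examples are sets of hashable facts; thresholds and scoring rules
    # are illustrative assumptions, not ALIGN's actual models.
    from collections import Counter

    class Generalization:
        def __init__(self, example):
            self.counts = Counter(example)  # fact -> #examples containing it
            self.n = 1
            self.discriminative = set()     # facts confirmed by near-misses

        def similarity(self, example):
            """Probability-weighted overlap, normalized by example size."""
            return sum(self.counts[f] / self.n for f in example) / max(len(example), 1)

        def assimilate(self, example):
            self.counts.update(example)
            self.n += 1

        def apply_near_miss(self, near_miss):
            """Facts probable in the concept but absent from a near-miss
            are the ones that discriminate the concept from it."""
            self.discriminative |= {f for f, c in self.counts.items()
                                    if c / self.n > 0.5 and f not in near_miss}

    class Concept:
        def __init__(self, threshold=0.6):
            self.gens, self.threshold = [], threshold

        def add_example(self, example):
            best = max(self.gens, key=lambda g: g.similarity(example), default=None)
            if best and best.similarity(example) >= self.threshold:
                best.assimilate(example)                   # merge into a cluster
            else:
                self.gens.append(Generalization(example))  # new disjunct

    # Toy "arch" concept with two disjunctive variants plus a near-miss.
    arch = Concept()
    arch.add_example({("on", "top", "left-post"), ("on", "top", "right-post")})
    arch.add_example({("on", "top", "left-post"), ("on", "top", "right-post"),
                      ("wedge", "top")})
    arch.add_example({("arc", "top"), ("on", "top", "pillar")})  # new disjunct
    arch.gens[0].apply_near_miss({("on", "top", "left-post")})   # missing right post
    print(len(arch.gens), arch.gens[0].discriminative)

Because the third example falls below the threshold for the first cluster, it starts a second generalization, giving a disjunctive concept; the near-miss marks the right post as the discriminating fact.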


Learning Plausible Inferences from Semantic Web Knowledge by Combining Analogical Generalization with Structured Logistic Regression

AAAI Conferences

Fast and efficient learning over large bodies of commonsense knowledge is a key requirement for cognitive systems. Semantic web knowledge bases provide an important new resource of ground facts from which plausible inferences can be learned. This paper applies structured logistic regression with analogical generalization (SLogAn) to make use of structural as well as statistical information to achieve rapid and robust learning. SLogAn achieves state-of-the-art performance in a standard triplet classification task on two data sets and, in addition, can provide understandable explanations for its answers.
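
To make the combination concrete, the following is a minimal Python/NumPy sketch of the general idea: featurize a candidate triplet by how typical its head and tail look under an analogical generalization for the relation, then fit a logistic-regression classifier. The generalization, attributes, and data are toy assumptions, not SLogAn's actual feature set.

    # A minimal sketch of logistic regression over analogy-derived
    # features for triplet classification. Toy data and features only.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def featurize(triplet, gen):
        """Features: how typical the head/tail attributes are for the
        relation, per an analogical generalization that maps an
        attribute to the fraction of aligned examples exhibiting it."""
        head_attrs, _rel, tail_attrs = triplet
        head_p = max((gen["head"].get(a, 0.0) for a in head_attrs), default=0.0)
        tail_p = max((gen["tail"].get(a, 0.0) for a in tail_attrs), default=0.0)
        return np.array([1.0, head_p, tail_p, head_p * tail_p])  # 1.0 = bias

    # Hypothetical generalization for "capitalOf": attribute
    # probabilities accumulated from aligned positive examples.
    gen = {"head": {"City": 1.0, "Capital": 0.8}, "tail": {"Country": 1.0}}

    # Toy training triplets: (head attributes, relation, tail attributes).
    data = [((["City", "Capital"], "capitalOf", ["Country"]), 1),
            ((["City"], "capitalOf", ["Country"]), 1),
            ((["Person"], "capitalOf", ["Country"]), 0),
            ((["City"], "capitalOf", ["Person"]), 0)]

    X = np.stack([featurize(t, gen) for t, _ in data])
    y = np.array([label for _, label in data], dtype=float)

    w = np.zeros(X.shape[1])
    for _ in range(2000):  # plain batch gradient descent on logistic loss
        w -= 0.5 * X.T @ (sigmoid(X @ w) - y) / len(y)

    probe = (["City"], "capitalOf", ["Country"])
    print(sigmoid(featurize(probe, gen) @ w))  # high plausibility score

The structural information enters through the features (derived from analogical generalizations), while the statistical weighting is handled by the regression, mirroring the division of labor described in the abstract.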


Moral Decision-Making by Analogy: Generalizations versus Exemplars

AAAI Conferences

As AI systems become ever more integrated into our lives, it is increasingly important to model moral reasoning accurately. Human moral reasoning is rapid and often unconscious; analogical reasoning, which can also operate unconsciously, is therefore a promising approach for modeling it. This paper explores the use of analogical generalizations to improve moral reasoning. Analogical reasoning has already been used to successfully model moral reasoning in the MoralDM model, but MoralDM exhaustively matches a new scenario against all known cases, which is computationally intractable and cognitively implausible for human-scale knowledge bases. We investigate the performance of an extension of MoralDM that uses the MAC/FAC model of analogical retrieval, over three conditions, across a set of highly confusable moral scenarios.
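
To make the retrieval step concrete, here is a minimal Python sketch of MAC/FAC-style two-stage retrieval, under toy assumptions: MAC scores every memory item with a cheap flat content vector (a bag of predicate names), and FAC re-ranks the few survivors with a more expensive structural comparison. The scenario encodings are hypothetical, and both scoring functions are simplified stand-ins for the real models.

    # A minimal sketch of MAC/FAC-style two-stage analogical retrieval.
    # Cases are sets of facts (predicate plus arguments); encodings and
    # scoring functions are simplified stand-ins for the real models.
    from collections import Counter
    import math

    def content_vector(case):
        """Flat bag of predicate names, ignoring structure (MAC signal)."""
        return Counter(pred for (pred, *_args) in case)

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def structural_score(probe, case):
        """Crude stand-in for structure mapping: count predicate pairs
        connected by a shared argument in both descriptions."""
        def links(c):
            return {(p1, p2) for (p1, *a1) in c for (p2, *a2) in c
                    if p1 < p2 and set(a1) & set(a2)}
        lp = links(probe)
        return len(lp & links(case)) / max(len(lp), 1)

    def mac_fac(probe, memory, k=2):
        pv = content_vector(probe)
        # MAC: cheap pass over all of memory to build a short candidate list.
        shortlist = sorted(memory, key=lambda c: cosine(pv, content_vector(c)),
                           reverse=True)[:k]
        # FAC: expensive structural comparison on the shortlist only.
        return max(shortlist, key=lambda c: structural_score(probe, c))

    memory = [  # hypothetical moral-scenario encodings
        {("harm", "act1", "one"), ("save", "act1", "five"), ("direct", "act1")},
        {("harm", "act2", "one"), ("save", "act2", "five"), ("indirect", "act2")},
        {("promise", "ag", "friend"), ("break", "ag", "promise1")},
    ]
    probe = {("harm", "actX", "one"), ("save", "actX", "five"), ("direct", "actX")}
    print(mac_fac(probe, memory))  # retrieves the direct-harm case

The point of the two stages is scalability: only the shortlist ever reaches the expensive matcher, so memory can grow to human scale without exhaustive matching.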


Using Analogy to Cluster Hand-Drawn Sketches for Sketch-Based Educational Software

AI Magazine

One of the major challenges to building intelligent educational software is determining what kinds of feedback to give learners. Useful feedback makes use of models of domain-specific knowledge, especially models that are commonly held by potential students. To empirically determine what these models are, student data can be clustered to reveal common misconceptions or common problem-solving strategies. This article describes how analogical retrieval and generalization can be used to cluster automatically analyzed hand-drawn sketches that incorporate both spatial and conceptual information. We use this approach to cluster a corpus of hand-drawn student sketches to discover common answers. Common answer clusters can be used for the design of targeted feedback and for assessment.
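
As a sketch of the clustering step, the following Python fragment greedily assigns each sketch to the most similar existing cluster, assuming each sketch has already been analyzed into a set of spatial and conceptual facts. The Jaccard similarity, the 0.5 threshold, and the fact encodings are illustrative stand-ins for the article's SME-based similarity and SAGE generalization.

    # A minimal sketch of clustering pre-analyzed sketches by similarity.
    # Each sketch is a set of spatial/conceptual facts; the encodings,
    # measure, and threshold are illustrative assumptions.

    def similarity(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0  # Jaccard overlap

    def cluster_sketches(sketches, threshold=0.5):
        clusters = []  # each cluster is a list of member fact sets
        for sk in sketches:
            # Retrieve the most similar cluster, using its first member
            # as a prototype (a simplification of generalization).
            best = max(clusters, key=lambda c: similarity(sk, c[0]), default=None)
            if best is not None and similarity(sk, best[0]) >= threshold:
                best.append(sk)
            else:
                clusters.append([sk])
        return clusters

    # Hypothetical encodings of student answers to a fault-line question.
    s1 = {("arrow-dir", "left"), ("touches", "arrow", "plate1")}
    s2 = {("arrow-dir", "left"), ("touches", "arrow", "plate1"),
          ("label", "fault")}
    s3 = {("arrow-dir", "up"), ("touches", "arrow", "plate2")}

    for i, c in enumerate(cluster_sketches([s1, s2, s3])):
        print(f"cluster {i}: {len(c)} sketch(es)")  # big clusters = common answers

Large clusters then correspond to common answers (correct or misconceived), which is what targeted feedback design needs.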


Graph Traversal Methods for Reasoning in Large Knowledge-Based Systems

AAAI Conferences

Commonsense reasoning at scale is a core problem for cognitive systems. In this paper, we discuss two ways in which heuristic graph traversal methods can be used to generate plausible inference chains. First, we discuss how Cyc’s predicate-type hierarchy can be used to get reasonable answers to queries. Second, we explain how connection graph-based techniques can be used to identify script-like structures. Finally, we demonstrate through experiments that these methods lead to significant improvement in accuracy for both Q/A and script construction.
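
As an illustration of the first idea, here is a minimal Python sketch of plausible-inference-chain generation by constrained graph traversal, where a per-query-type table of permitted predicates stands in for filtering via a predicate-type hierarchy like Cyc's. All facts, predicate names, and the table itself are hypothetical.

    # A minimal sketch of plausible-inference-chain generation by
    # heuristic graph traversal. A per-query-type table of permitted
    # predicates stands in for a predicate-type hierarchy; the facts
    # and the table are hypothetical.
    from collections import deque

    FACTS = [("bornIn", "curie", "warsaw"), ("cityOf", "warsaw", "poland"),
             ("marriedTo", "curie", "pierre"), ("citizenOf", "curie", "france")]

    PLAUSIBLE = {"nationality-like": {"bornIn", "cityOf", "citizenOf"}}

    def chains(start, goal, query_type, max_len=3):
        allowed = PLAUSIBLE[query_type]
        adj = {}  # adjacency over entities, traversing facts both ways
        for (p, a, b) in FACTS:
            if p in allowed:
                adj.setdefault(a, []).append((p, b))
                adj.setdefault(b, []).append((p, a))
        results, queue = [], deque([(start, [])])
        while queue:
            node, path = queue.popleft()
            if node == goal and path:
                results.append(path)  # a plausible chain of facts
                continue
            if len(path) < max_len:
                for (p, nxt) in adj.get(node, []):
                    if nxt != start and nxt not in {n for (_, n) in path}:
                        queue.append((nxt, path + [(p, nxt)]))
        return results

    # "Is curie plausibly connected to poland in a nationality-like way?"
    print(chains("curie", "poland", "nationality-like"))
    # -> [[('bornIn', 'warsaw'), ('cityOf', 'poland')]]

Restricting traversal to predicates that are plausible for the query type is what keeps the search tractable and the resulting chains reasonable, rather than wandering through irrelevant relations such as marriedTo.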


Automatic Extraction of Efficient Axiom Sets from Large Knowledge Bases

AAAI Conferences

Efficient reasoning in large knowledge bases is an important problem for AI systems. Hand-optimization of reasoning becomes impractical as KBs grow, and impossible as knowledge is automatically added via knowledge capture or machine learning. This paper describes a method for automatically extracting axioms for efficient inference over large knowledge bases, given a set of query types and information about the types of facts currently in the KB as well as those that might be learned. We use the highly right-skewed distribution of predicate connectivity in large knowledge bases to prune intractable regions of the search space. We show the efficacy of these techniques via experiments using queries from a learning by reading system. Results show that these methods lead to an order-of-magnitude improvement in time with minimal loss in coverage.
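
The following is a minimal Python sketch of the pruning idea: measure each predicate's connectivity (its number of ground facts), drop predicates in the heavy right tail of that distribution, and only then enumerate candidate join patterns as axiom bodies. The KB, cutoff, and two-predicate chain pattern are toy assumptions, not the paper's actual extraction procedure.

    # A minimal sketch of pruning axiom search by predicate connectivity.
    # The KB, cutoff, and two-predicate join pattern are toy assumptions.
    from collections import Counter
    from itertools import combinations

    FACTS = [("worksFor", "a", "ibm"), ("worksFor", "b", "ibm"),
             ("worksFor", "c", "acme"), ("ceoOf", "d", "acme"),
             ("locatedIn", "ibm", "ny"), ("locatedIn", "acme", "boston"),
             ("relatedTo", "a", "b"), ("relatedTo", "b", "c"),
             ("relatedTo", "c", "d"), ("relatedTo", "a", "d"),
             ("relatedTo", "b", "d"), ("relatedTo", "a", "c")]

    connectivity = Counter(p for (p, _a, _b) in FACTS)  # facts per predicate
    CUTOFF = 4  # prune hub predicates in the heavy right tail (assumed)

    def candidate_axioms():
        """Enumerate P(x, z) & Q(z, y) join patterns as candidate axiom
        bodies, skipping over-connected (hub) predicates and keeping a
        pattern only if some ground facts actually join on z."""
        preds = [p for p in connectivity if connectivity[p] <= CUTOFF]
        joins = []
        for p, q in combinations(preds, 2):
            mids = {b for (r, _a, b) in FACTS if r == p}
            heads = {a for (r, a, _b) in FACTS if r == q}
            if mids & heads:
                joins.append((p, q))
        return joins

    print("pruned hubs:", [p for p, c in connectivity.items() if c > CUTOFF])
    print("candidate join patterns:", candidate_axioms())
    # e.g. ('worksFor', 'locatedIn') suggests worksFor(x,z) & locatedIn(z,y)

Because a few hub predicates like relatedTo account for most of the fan-out in a right-skewed KB, excluding them removes the intractable regions of the search space while leaving most useful chains intact.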