This paper explores the use of analogy to learn about properties of sketches. Sketches often convey conceptual relationships between entities via the visual relationships between their depictions in the sketch. Understanding these conventions is an important part of adapting to a user. This paper describes how learning by accumulating examples can be used to make suggestions about such relationships in new sketches. We describe how sketches are used in Companion Cognitive Systems, illustrating one context in which this problem arises. We describe how existing cognitive simulations of analogical matching and retrieval are used to generate suggestions for new sketches based on analogies with prior sketches. Two experiments provide evidence for the accuracy and coverage of this technique.
One of the major challenges in building intelligent educational software is determining what kinds of feedback to give learners. Useful feedback draws on models of domain-specific knowledge, especially models that are commonly held by potential students. To determine empirically what these models are, student data can be clustered to reveal common misconceptions or common problem-solving strategies. This paper describes how analogical retrieval and generalization can be used to cluster automatically analyzed hand-drawn sketches, incorporating both spatial and conceptual information. We use this approach to cluster a corpus of hand-drawn student sketches to discover common answers. Common answer clusters can be used for the design of targeted feedback and for assessment.
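The core clustering idea can be illustrated with a minimal sketch. This is not the paper's actual system; it assumes each analyzed sketch is reduced to a set of relational facts, uses Jaccard overlap as a stand-in for analogical similarity, and keeps the intersection of members' facts as a crude generalization. The threshold value is an invented parameter.

```python
# Hedged illustration: threshold-based incremental clustering over
# relational fact sets, a simplified stand-in for analogical
# retrieval and generalization. Not the implementation from the paper.

def jaccard(a, b):
    """Overlap-based similarity between two fact sets."""
    return len(a & b) / len(a | b)

def cluster_sketches(sketches, threshold=0.5):
    """Assign each sketch to the most similar existing cluster, or start a
    new one when no cluster is similar enough. Each cluster stores the
    intersection of its members' facts as a crude generalization."""
    clusters = []  # each: {"general": fact set, "members": [fact sets]}
    for facts in sketches:
        best, best_sim = None, 0.0
        for c in clusters:
            sim = jaccard(facts, c["general"])
            if sim > best_sim:
                best, best_sim = c, sim
        if best is not None and best_sim >= threshold:
            best["members"].append(facts)
            best["general"] &= facts  # keep only the shared facts
        else:
            clusters.append({"general": set(facts), "members": [facts]})
    return clusters
```

For example, two sketches that share most of their spatial facts (a sun above a house, the house touching the ground) fall into one cluster, while an unrelated sketch starts a new one; the per-cluster generalizations then indicate what a common answer looks like.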
Fast and efficient learning over large bodies of commonsense knowledge is a key requirement for cognitive systems. Semantic web knowledge bases provide an important new resource of ground facts from which plausible inferences can be learned. This paper applies structured logistic regression with analogical generalization (SLogAn) to make use of structural as well as statistical information to achieve rapid and robust learning. SLogAn achieves state-of-the-art performance in a standard triplet classification task on two data sets and, in addition, can provide understandable explanations for its answers.
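To make the triplet classification task concrete, here is a toy sketch, not SLogAn itself: it scores (head, relation, tail) triplets with a hand-rolled logistic regression over simple structural features (whether the head and tail have been observed in those argument positions for that relation). The feature set, training data, and hyperparameters are invented for illustration.

```python
# Hedged illustration of triplet classification with logistic regression
# over structural features. This is a toy stand-in, not SLogAn.

import math

def features(triplet, known_facts):
    """Structural features: has this entity appeared as the subject
    (resp. object) of this relation among the known ground facts?"""
    h, r, t = triplet
    heads = {x for x, rel, _ in known_facts if rel == r}
    tails = {y for _, rel, y in known_facts if rel == r}
    return [
        1.0,                         # bias
        1.0 if h in heads else 0.0,  # head seen as subject of r
        1.0 if t in tails else 0.0,  # tail seen as object of r
    ]

def train(examples, known_facts, lr=0.5, epochs=200):
    """Stochastic gradient descent on the logistic loss."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for triplet, label in examples:
            x = features(triplet, known_facts)
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            g = p - label  # gradient of the logistic loss w.r.t. the score
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w

def classify(triplet, w, known_facts):
    """Label a triplet as plausible when its score is positive."""
    x = features(triplet, known_facts)
    return sum(wi * xi for wi, xi in zip(w, x)) > 0.0
```

Because the features are symbolic and inspectable, a positive classification can be traced back to the structural evidence that supports it, which gestures at the kind of understandable explanation the paper emphasizes.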
Human action recognition remains a difficult problem for AI. Traditional machine learning techniques can achieve high recognition accuracy, but they are typically black boxes whose internal models are not inspectable and whose results are not explainable. This paper describes a new pipeline for recognizing human actions from skeleton data via analogical generalization. Specifically, starting with Kinect data, we segment each human action into temporal regions where the motion is qualitatively uniform, creating a sketch graph that provides a form of qualitative representation of the behavior that is easy to visualize. Models are learned from sketch graphs via analogical generalization and then used for classification via analogical retrieval. The retrieval process also produces links between the new example and components of the model, which provide explanations. To improve recognition accuracy, we implement dynamic feature selection to pick reasonable relational features. We show the explanation advantage of our approach by example, and results on three public datasets illustrate its utility.
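The segmentation step can be sketched in simplified form. This is not the paper's pipeline: it assumes a 1-D joint trajectory and takes "qualitatively uniform" to mean the sign of the velocity (rising, steady, falling) does not change; the epsilon threshold for "steady" is an invented parameter.

```python
# Hedged illustration: split a 1-D motion trace into temporal regions
# where the qualitative direction of motion is constant. A simplified
# stand-in for the paper's segmentation step.

def qualitative_state(v, eps=1e-3):
    """Map a velocity sample to a qualitative state: +, -, or 0."""
    if v > eps:
        return "+"
    if v < -eps:
        return "-"
    return "0"

def segment(trace, eps=1e-3):
    """Return (state, start_index, end_index) segments of the trace,
    breaking wherever the qualitative state of the velocity changes."""
    segments = []
    start = 0
    prev = None
    for i in range(1, len(trace)):
        s = qualitative_state(trace[i] - trace[i - 1], eps)
        if prev is None:
            prev = s
        elif s != prev:
            segments.append((prev, start, i - 1))
            start, prev = i - 1, s
    if prev is not None:
        segments.append((prev, start, len(trace) - 1))
    return segments
```

A raise-then-lower arm motion, for instance, yields a rising segment, a falling segment, and a resting segment; such segments (across joints) would become the nodes of a sketch graph.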
Deception involves corrupting the predictions or explanations of others. A deeper understanding of how this works thus requires modeling how human abduction and prediction operate. This paper proposes that most human abduction and prediction are carried out via analogy, over experience and generalizations constructed from experience. I take experience to include cultural products, such as stories. I outline how analogical reasoning and learning can be used to generate predictions and explanations, along with both the advantages of this approach and the technical questions it raises. Concrete examples involving deception and counter-deception are used to explore these ideas further.