Smart Information Flow Technologies (SIFT)
Extending Analogical Generalization with Near-Misses
McLure, Matthew D. (Northwestern University) | Friedman, Scott E. (Smart Information Flow Technologies (SIFT)) | Forbus, Kenneth D. (Northwestern University)
Concept learning is a central problem for cognitive systems. Generalization techniques can help organize examples by their commonalities, but comparisons with non-examples (near-misses) can provide discrimination. Early work on near-misses required examples hand-selected by a teacher who understood the learner’s internal representations. This paper introduces Analogical Learning by Integrating Generalization and Near-misses (ALIGN) and describes three key advances. First, domain-general cognitive models of analogical processes are used to handle a wider range of examples. Second, ALIGN’s analogical generalization process constructs multiple probabilistic representations per concept via clustering, and hence can learn disjunctive concepts. Finally, ALIGN uses unsupervised analogical retrieval to find its own near-miss examples. We show that ALIGN outperforms analogical generalization on two perceptual data sets: (1) hand-drawn sketches; and (2) geospatial concepts from strategy-game maps.
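To make the pipeline sketched in the abstract concrete, the toy Python sketch below illustrates the general shape of generalization-with-near-misses: cluster positive examples into several probabilistic generalizations (allowing disjunctive concepts), retrieve the most similar non-example as a near-miss, and extract features that discriminate the concept from it. This is an illustrative simplification only, not ALIGN itself: ALIGN operates over structured relational descriptions using cognitive models of analogy, whereas this sketch uses flat feature sets and Jaccard similarity, and all identifiers and the arch/near-miss toy data are hypothetical.

    # Toy analogue of analogical generalization with near-misses over flat
    # feature sets (an illustrative simplification, not the ALIGN system).
    from collections import Counter

    def jaccard(a, b):
        """Similarity of two feature sets (stand-in for an analogical match score)."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def generalize(examples, threshold=0.5):
        """Greedy clustering of examples; each cluster becomes one probabilistic
        generalization, so a concept may be represented disjunctively."""
        clusters = []
        for ex in examples:
            best, best_sim = None, threshold
            for cluster in clusters:
                sim = jaccard(ex, set().union(*cluster))
                if sim >= best_sim:
                    best, best_sim = cluster, sim
            if best is None:
                clusters.append([ex])
            else:
                best.append(ex)
        generalizations = []
        for cl in clusters:
            counts = Counter(feat for ex in cl for feat in ex)
            generalizations.append({feat: n / len(cl) for feat, n in counts.items()})
        return generalizations

    def find_near_miss(generalization, non_examples):
        """Unsupervised retrieval: the non-example most similar to the generalization."""
        proto = set(generalization)
        return max(non_examples, key=lambda ex: jaccard(ex, proto))

    def discriminating_features(generalization, near_miss):
        """Features that hold in a majority of the cluster but are absent from the
        near-miss: candidates to weight up as discriminators."""
        return {f for f, p in generalization.items() if p > 0.5 and f not in near_miss}

    # Toy usage: two "arch" examples and two non-examples, one of which is a near-miss.
    arches = [{"block_a", "block_b", "on_top", "gap_below"},
              {"block_a", "block_b", "on_top", "gap_below", "painted"}]
    others = [{"block_a", "block_b", "on_top"},            # near-miss: no gap below
              {"block_a", "block_b", "side_by_side"}]

    for gen in generalize(arches):
        miss = find_near_miss(gen, others)
        print(discriminating_features(gen, miss))          # -> {'gap_below'}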
The Location of Words: Evidence from Generation and Spatial Description
McDonald, David D. (Smart Information Flow Technologies (SIFT))
Language processing architectures today are rarely designed to provide psychologically plausible accounts of their representations and algorithms. Engineering decisions dominate. This has led to words being seen as an incidental part of the architecture: the repository of all of language’s idiosyncratic aspects. Drawing on a body of past and ongoing research by myself and others, I have concluded that this view of words is wrong. Words are actually present at the most abstract, pre-linguistic levels of the NLP architecture, and there are phenomena in language use that are best accounted for by assuming that concepts are words.