Collaborating Authors

 Kanerva, Pentti


Computing with Residue Numbers in High-Dimensional Representation

arXiv.org Artificial Intelligence

We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using vastly fewer resources than previous methods, and it exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.
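As a concrete illustration (a minimal sketch, not the authors' implementation), the code below assumes a fractional-power phasor encoding: an integer modulo m is represented by raising a fixed random phasor vector to that power, so that adding numbers corresponds to multiplying their codes element-wise. The dimensionality, modulus, and function names are choices made for this example.

```python
# Minimal sketch (not the authors' code) of a phasor-based residue encoding,
# assuming fractional power encoding: an integer x modulo m is represented by
# raising a fixed random phasor vector to the power x, element-wise.
import numpy as np

rng = np.random.default_rng(0)
D = 2000  # vector dimensionality (arbitrary choice for the example)

def base_vector(m):
    """Random phasor vector whose phases are multiples of 2*pi/m."""
    phases = rng.integers(0, m, size=D) * (2 * np.pi / m)
    return np.exp(1j * phases)

def encode(x, base):
    """Encode the residue of x: element-wise exponentiation of the base vector."""
    return base ** x

def similarity(u, v):
    """Normalized inner product; ~1 for equal residues, ~0 otherwise."""
    return np.abs(np.vdot(u, v)) / D

m = 7
z = base_vector(m)

# Addition of numbers maps to element-wise multiplication of their codes:
a, b = 3, 5
lhs = encode(a, z) * encode(b, z)        # code(3) * code(5)
rhs = encode((a + b) % m, z)             # code(8 mod 7) = code(1)
print(similarity(lhs, rhs))              # close to 1.0
print(similarity(lhs, encode(2, z)))     # close to 0.0 (wrong residue)
```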


Vector Symbolic Architectures as a Computing Framework for Nanoscale Hardware

arXiv.org Artificial Intelligence

This article reviews recent progress in the development of the computing framework Vector Symbolic Architectures (also known as Hyperdimensional Computing). This framework is well suited for implementation in stochastic, nanoscale hardware and it naturally expresses the types of cognitive operations required for Artificial Intelligence (AI). We demonstrate that the ring-like algebraic structure of Vector Symbolic Architectures offers simple but powerful operations on high-dimensional vectors that can support all data structures and manipulations relevant in modern computing. In addition, we illustrate the distinguishing feature of Vector Symbolic Architectures, "computing in superposition," which sets them apart from conventional computing. This latter property opens the door to efficient solutions to the difficult combinatorial search problems inherent in AI applications. Vector Symbolic Architectures are Turing complete, as we show, and we see them acting as a framework for computing with distributed representations in myriad AI settings. This article serves as a reference for computer architects by illustrating the techniques and philosophy of VSAs for distributed computing and their relevance to emerging computing hardware, such as neuromorphic computing.
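To make the ring-like structure concrete, here is a minimal sketch (illustrative only; all names and parameters are chosen for the example) of the two core operations on random bipolar vectors -- binding by element-wise multiplication and bundling by majority vote -- used to store and query a small key-value record held in a single vector.

```python
# Minimal sketch (illustrative, not from the article) of the two core VSA
# operations on random bipolar vectors: binding (element-wise multiplication)
# and bundling (element-wise majority), used here to build a key-value record.
import numpy as np

rng = np.random.default_rng(1)
D = 10_000

def hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    return a * b                          # self-inverse: bind(bind(a, b), b) == a

def bundle(*vs):
    return np.sign(np.sum(vs, axis=0))    # majority vote, ties mapped to 0

def sim(a, b):
    return float(a @ b) / D               # ~1 similar, ~0 unrelated

# A record {name: Alice, job: doctor} held in a single vector:
NAME, JOB, ALICE, DOCTOR = hv(), hv(), hv(), hv()
record = bundle(bind(NAME, ALICE), bind(JOB, DOCTOR))

# Query "in superposition": unbind with a key, then clean up by nearest neighbor.
noisy = bind(record, JOB)
codebook = {"ALICE": ALICE, "DOCTOR": DOCTOR}
print(max(codebook, key=lambda k: sim(noisy, codebook[k])))  # -> DOCTOR
```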


What We Mean When We Say "What's the Dollar of Mexico?": Prototypes and Mapping in Concept Space

AAAI Conferences

We assume that the brain is some kind of a computer and look at operations implied by the figurative use of language. Figurative language is pervasive; it bypasses the literal meaning of what is said and is interpreted metaphorically or by analogy. Such an interpretation calls for a mapping in concept space, leading us to speculate about the nature of concept space in terms of readily computable mappings. We find that mappings of the appropriate kind are possible in high-dimensional spaces and demonstrate them with the simplest such space, namely one whose dimensions are binary. Two operations on binary vectors, one akin to addition and the other akin to multiplication, allow new representations to be composed from existing ones, and the "multiplication" operation is also suited for the mapping. The properties of high-dimensional spaces have been shown elsewhere to correspond to cognitive phenomena such as memory recall. The present ideas further suggest the suitability of high-dimensional representation for cognitive modeling.
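The sketch below is one hedged reading of how such a mapping can be computed with dense binary vectors, assuming XOR for the multiplication-like operation and bitwise majority for the addition-like one; the roles, fillers, and dimensionality are invented for the example and are not taken from the paper.

```python
# Minimal sketch (assumed details, in the spirit of the paper) of the
# "dollar of Mexico" mapping with dense binary vectors: XOR plays the role of
# multiplication, bitwise majority the role of addition.
import numpy as np

rng = np.random.default_rng(2)
D = 10_000

def hv():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):          # "multiplication": XOR, its own inverse
    return a ^ b

def bundle(*vs):         # "addition": bitwise majority (odd count, so no ties)
    return (np.sum(vs, axis=0) > len(vs) // 2).astype(np.uint8)

def sim(a, b):           # fraction of matching bits; 0.5 means unrelated
    return float(np.mean(a == b))

# Roles and fillers (hypothetical names for the example)
NAME, CAPITAL, CURRENCY = hv(), hv(), hv()
USA, WASHINGTON, DOLLAR = hv(), hv(), hv()
MEXICO, MEXICO_CITY, PESO = hv(), hv(), hv()

usa = bundle(bind(NAME, USA), bind(CAPITAL, WASHINGTON), bind(CURRENCY, DOLLAR))
mex = bundle(bind(NAME, MEXICO), bind(CAPITAL, MEXICO_CITY), bind(CURRENCY, PESO))

# The mapping between the two concept spaces, and the analogical query:
mapping = bind(usa, mex)
answer = bind(DOLLAR, mapping)                   # "the dollar of Mexico"
print(sim(answer, PESO), sim(answer, DOLLAR))    # PESO wins by a wide margin
```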


Reports on the 2004 AAAI Fall Symposia

AI Magazine

The Association for the Advancement of Artificial Intelligence presented its 2004 Fall Symposium Series Friday through Sunday, October 22-24 at the Hyatt Regency Crystal City in Arlington, Virginia, adjacent to Washington, DC. The symposium series was preceded by a one-day AI funding seminar. The topics of the eight symposia in the 2004 Fall Symposia Series were: (1) Achieving Human-Level Intelligence through Integrated Systems and Research; (2) Artificial Multiagent Learning; (3) Compositional Connectionism in Cognitive Science; (4) Dialogue Systems for Health Communications; (5) The Intersection of Cognitive Science and Robotics: From Interfaces to Intelligence; (6) Making Pen-Based Interaction Intelligent and Natural; (7) Real-Life Reinforcement Learning; and (8) Style and Meaning in Language, Art, Music, and Design.


Reports on the 2004 AAAI Fall Symposia

AI Magazine

The symposium series was held Friday through Sunday, October 22-24, at the Hyatt Regency Crystal City in Arlington, Virginia, adjacent to Washington, DC. It was preceded on Thursday, October 21, by a one-day AI funding seminar, open to all registered attendees, which gave new and junior researchers--as well as students and postdoctoral fellows--an opportunity to get an inside look at what funding agencies expect in proposals from prospective grantees. There was consensus among participants that metrics in machine learning, planning, and natural language processing have driven advances in those subfields, but that those metrics have also distracted attention from how such capabilities can be integrated. The topic of multiagent learning is of increasing interest with the advent of peer-to-peer network services and ad-hoc wireless networks; domains for motivating, testing, and funding this research were proposed (some during our joint session), involving large numbers of agents, more complex agent behaviors, partially observable environments, and mutual adaptation. The papers of the symposia are also available as AAAI technical reports.


The 2002 AAAI Spring Symposium Series

AI Magazine

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2002 Spring Symposium Series, held Monday through Wednesday, 25 to 27 March 2002, at Stanford University. The nine symposia were entitled (1) Acquiring (and Using) Linguistic (and World) Knowledge for Information Access; (2) Artificial Intelligence and Interactive Entertainment; (3) Collaborative Learning Agents; (4) Information Refinement and Revision for Decision Making: Modeling for Diagnostics, Prognostics, and Prediction; (5) Intelligent Distributed and Embedded Systems; (6) Logic-Based Program Synthesis: State of the Art and Future Trends; (7) Mining Answers from Texts and Knowledge Bases; (8) Safe Learning Agents; and (9) Sketch Understanding.


Contour-Map Encoding of Shape for Early Vision

Neural Information Processing Systems

Contour maps provide a general method for recognizing two-dimensional shapes. All but blank images give rise to such maps, and people are good at recognizing objects and shapes from them. The maps are encoded easily in long feature vectors that are suitable for recognition by an associative memory. These properties of contour maps suggest a role for them in early visual perception. The prevalence of direction-sensitive neurons in the visual cortex of mammals supports this view.
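The abstract does not spell out the encoding, so the sketch below assumes one plausible reading rather than the paper's actual procedure: sample the local contour (iso-intensity) direction on a coarse grid, quantize it into a few direction bins, and concatenate the results into a long binary feature vector. Grid size, number of bins, and function names are arbitrary choices for the example.

```python
# Illustrative sketch only: this assumes one plausible reading of the abstract,
# not the paper's method -- sample the local contour direction on a coarse grid,
# quantize it, and concatenate one-hot codes into a long binary feature vector
# suitable for an associative memory.
import numpy as np

def contour_map_code(image, grid=8, n_dirs=8):
    """Encode a 2-D intensity image as a long binary feature vector."""
    gy, gx = np.gradient(image.astype(float))
    # Contour direction is perpendicular to the gradient; fold to [0, pi).
    theta = (np.arctan2(gy, gx) + np.pi / 2) % np.pi
    H, W = image.shape
    code = []
    for i in range(grid):
        for j in range(grid):
            cell = np.s_[i * H // grid:(i + 1) * H // grid,
                         j * W // grid:(j + 1) * W // grid]
            strength = np.hypot(gx[cell], gy[cell])
            one_hot = np.zeros(n_dirs, dtype=np.uint8)
            if strength.max() > 1e-6:           # blank cells stay all-zero
                d = int(theta[cell].flat[strength.argmax()] / np.pi * n_dirs) % n_dirs
                one_hot[d] = 1
            code.append(one_hot)
    return np.concatenate(code)                 # length grid*grid*n_dirs

# Example: a vertical step edge yields vertical contour codes in the edge cells.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
print(contour_map_code(img).shape)              # (512,)
```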

