Multi-Modal Cognitive States: Augmenting the State in Cognitive Architectures

AAAI Conferences

Idealizing intelligence as an embodied activity, one that integrates cognition, perception, and the body, places the tightest constraints on the design space for AI artifacts, forcing AI to deeply understand the design tradeoffs and tricks that biology has developed. I propose that a step in the design of such artifacts is to broaden the notion of cognitive state from the current linguistic-symbolic, Language-of-Thought framework to a multi-modal one, in which perceptual and kinesthetic modalities participate in thinking. This contrasts with the currently dominant theories in AI and cognitive science, which treat perception and motor activity as modules external to central cognition. I develop the outlines of this proposal and describe the implementation of a bimodal version in which a diagrammatic representation component is added to the cognitive state.
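
To make the proposal concrete, here is a minimal, hypothetical Python sketch of a bimodal state: a symbolic component holds predicates, a diagrammatic component holds metric objects, and a perception routine reads a spatial relation directly off the diagram and feeds the result back into the symbolic side. All class and function names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a bimodal cognitive state (not the paper's code).
from dataclasses import dataclass, field

@dataclass
class DiagramObject:
    name: str
    kind: str      # "point", "curve", or "region"
    coords: tuple  # metric data, e.g. ((x, y), ...) vertices

@dataclass
class BimodalState:
    symbolic: set = field(default_factory=set)   # e.g. {("left-of", "A", "B")}
    diagram: dict = field(default_factory=dict)  # name -> DiagramObject

    def add_object(self, obj: DiagramObject):
        self.diagram[obj.name] = obj

    def perceive_left_of(self, a: str, b: str) -> bool:
        """A diagrammatic 'perception routine': read a spatial relation
        off the metric representation instead of inferring it symbolically."""
        return self.diagram[a].coords[0][0] < self.diagram[b].coords[0][0]

state = BimodalState()
state.add_object(DiagramObject("A", "point", ((0.0, 1.0),)))
state.add_object(DiagramObject("B", "point", ((2.0, 1.0),)))
if state.perceive_left_of("A", "B"):
    # the perceived relation re-enters symbolic thought
    state.symbolic.add(("left-of", "A", "B"))
print(state.symbolic)
```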


A Gauntlet for Evaluating Cognitive Architectures

AAAI Conferences

We present a set of phenomena that can be used to evaluate cognitive architectures intended as designs for intelligent systems. To date, we know of few architectures that address more than a handful of these phenomena, and of none that can explain all of them. The phenomena thus test the generality of a system and can be used to expose weaknesses in an architecture's design. They encourage autonomous learning, development of representations, and domain independence, which we argue are critical for a solution to the AI problem.
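
As a rough illustration of how such a gauntlet could be applied, the sketch below scores an architecture by the fraction of target phenomena it addresses. The phenomenon list reuses the three properties named in the abstract; the dictionary interface is a placeholder assumption, not the authors' methodology.

```python
# Hypothetical evaluation harness for a "gauntlet" of phenomena.
PHENOMENA = [
    "autonomous learning",
    "development of representations",
    "domain independence",
]

def evaluate(architecture, phenomena=PHENOMENA):
    """architecture: mapping from phenomenon name to an 'addresses it' flag."""
    covered = [p for p in phenomena if architecture.get(p, False)]
    missing = [p for p in phenomena if p not in covered]
    return {"covered": covered, "missing": missing,
            "score": len(covered) / len(phenomena)}

toy_arch = {"domain independence": True}
print(evaluate(toy_arch))  # score here is 1/3: two phenomena unaddressed
```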


Multiple Representations in Cognitive Architectures

AAAI Conferences

The widely demonstrated human ability to deal with multiple representations of information has a number of important implications for a proposed standard model of the mind (SMM). In this paper we outline four such implications and argue that an SMM must incorporate (a) multiple representational formats and (b) meta-cognitive processes that operate on them. We then describe current approaches to extending cognitive architectures with visuospatial representations, in part to illustrate the limitations of current architectures with respect to the implications we raise, but also to identify a basis on which consensus about the nature of these additional representations can be reached. We believe that addressing these implications and outlining a specification for multiple representations should be a key goal for those seeking to develop a standard model of the mind.
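
The following sketch illustrates requirements (a) and (b) in miniature: two representational formats expose the same solving interface, and a meta-cognitive layer inspects each format's self-reported cost estimate before committing to one. The classes, costs, and task encoding are invented for illustration and are not part of the proposed SMM.

```python
# Hypothetical sketch: multiple formats plus a meta-cognitive selector.
class SymbolicFormat:
    name = "symbolic"
    def estimate_cost(self, task): return 5 if task["spatial"] else 1
    def solve(self, task): return f"{self.name} solution to {task['goal']}"

class VisuospatialFormat:
    name = "visuospatial"
    def estimate_cost(self, task): return 1 if task["spatial"] else 5
    def solve(self, task): return f"{self.name} solution to {task['goal']}"

def metacognitive_solve(task, formats):
    # Meta-cognition here means reasoning about the representations
    # themselves: inspect each format's predicted cost before choosing.
    best = min(formats, key=lambda f: f.estimate_cost(task))
    return best.solve(task)

task = {"goal": "is the lamp left of the desk?", "spatial": True}
print(metacognitive_solve(task, [SymbolicFormat(), VisuospatialFormat()]))
```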


AI and Mental Imagery

AAAI Conferences

Vision and space are prominent modalities in our experience as humans. We live in a richly visual world and are constantly, acutely aware of our position in space and our surroundings. In contrast to this seemingly precise awareness, we are also able to reason abstractly, use language, and construct arbitrary hypothetical scenarios. In this position paper, we present an AI system we are building to work toward human capability in visuospatial processing. We take mental imagery processing as our psychological basis and integrate it with symbolic processing. In designing this system, we consider constraints from the natural world (as described by psychology and neuroscience) as well as those uncovered by AI research. In doing so, we hope to bridge the gap between abstract reasoning and detailed perception.
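
One hypothetical way to picture the imagery-symbol integration the abstract describes: "imagine" objects onto a 2-D occupancy grid, then answer a symbolic query by scanning the depiction rather than by logical inference. The grid representation and function names below are assumptions made for this sketch, not the authors' system.

```python
# Hypothetical sketch of imagery-backed symbolic query answering.
def imagine(objects, width=10, height=5):
    """Render named objects (given as (x, y) cells) into a grid 'image'."""
    grid = [[None] * width for _ in range(height)]
    for name, (x, y) in objects.items():
        grid[y][x] = name
    return grid

def inspect_above(grid, a, b):
    """Symbolic query answered by scanning the depiction, not by inference."""
    pos = {cell: (x, y) for y, row in enumerate(grid)
           for x, cell in enumerate(row) if cell}
    return pos[a][1] < pos[b][1]   # smaller row index means higher up

grid = imagine({"bird": (3, 0), "cat": (3, 4)})
print(("above", "bird", "cat", inspect_above(grid, "bird", "cat")))
```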