In earlier work, we have shown how a cognitive architecture can be augmented with a diagrammatic reasoning system to produce a bimodal cognitive architecture. In this paper, we show how this bimodal architecture is also bi-representational (multi-representational in the general case) by describing desiderata for representational formalisms and showing how the diagrammatic representation in biSoar satisfies them.
I propose that the notion of cognitive state be broadened from the current predicate-symbolic, Language-of-Thought framework to a multi-modal one, in which perceptual and kinesthetic modalities participate in thinking. In contrast to the roles assigned to perception and motor activities as modules external to central cognition in the currently dominant theories in AI and Cognitive Science, in the proposed approach central cognition incorporates parts of the perceptual machinery. I motivate and describe the proposal schematically, and describe the implementation of a bimodal version in which a diagrammatic representation component is added to the cognitive state. The proposal explains our rich multimodal internal experience, and can be a key step in the realization of embodied agents. The proposed multimodal cognitive state can significantly enhance the agent's problem solving. Note that in the current framework, memory, as well as the information retrieved from memory and from perception, is represented in a predicate-symbolic form.
The idealization of intelligence as an embodied activity, involving an integration of cognition, perception, and the body, places the tightest constraints on the design space for AI artifacts, forcing AI to deeply understand the design tradeoffs and tricks that biology has developed. I propose that a step in the design of such artifacts is to broaden the notion of cognitive state from the current linguistic-symbolic, Language-of-Thought framework to a multi-modal one, in which perceptual and kinesthetic modalities participate in thinking. This is in contrast to the roles assigned to perception and motor activities as modules external to central cognition in the currently dominant theories in AI and Cognitive Science. I develop the outlines of this proposal, and describe the implementation of a bimodal version in which a diagrammatic representation component is added to the cognitive state.
This paper explores the idea that the cognitive state during problem solving with diagrams is bimodal: one component is the traditional predicate-symbolic representation, composed of relations between entities in the domain of interest, while the second is an internal diagrammatic representation. In parallel with the operators in the symbolic representation, which are based on symbol matching and inferencing, there is a set of operators in the diagrammatic component that apply perceptual operations to the elements of the diagram to generate information. In addition, there is a set of diagram construction operations that may modify the diagram by adding, deleting, and modifying diagrammatic elements in the service of problem-solving goals. We describe the design of the diagrammatic component of the architecture and show how the symbolic and diagrammatic modes collaborate in the solution of a problem. We end the paper with a view of the cognitive state as multi-modal, in consonance with our own phenomenal sense of experiencing the world in multiple modalities and using these senses in solving problems.
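The collaboration between the two components can be conveyed with a minimal sketch. The names below (BimodalState, place, perceive_left_of, and the tuple-based fact encoding) are hypothetical illustrations chosen for this example, not the biSoar implementation; the sketch only shows the pattern of a construction operator modifying the diagram and a perceptual operator reading a spatial relation off it to generate a symbolic fact.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: class and operator names are hypothetical,
# not taken from the biSoar architecture.

@dataclass
class DiagramElement:
    """A diagrammatic object, here simplified to a labeled 2-D point."""
    label: str
    x: float
    y: float

@dataclass
class BimodalState:
    # Predicate-symbolic component: a set of relational facts.
    facts: set = field(default_factory=set)
    # Diagrammatic component: labeled spatial elements.
    diagram: dict = field(default_factory=dict)

    def place(self, label: str, x: float, y: float) -> None:
        """Diagram construction operator: add or modify an element."""
        self.diagram[label] = DiagramElement(label, x, y)

    def perceive_left_of(self, a: str, b: str) -> bool:
        """Perceptual operator: read a spatial relation off the diagram
        and deposit the result as a symbolic fact."""
        if self.diagram[a].x < self.diagram[b].x:
            self.facts.add(("left-of", a, b))
            return True
        return False

# Usage: construct a small diagram, then let perception generate facts.
state = BimodalState()
state.place("A", 0.0, 1.0)
state.place("B", 3.0, 1.0)
state.perceive_left_of("A", "B")
print(state.facts)  # {('left-of', 'A', 'B')}
```

The point of the sketch is the division of labor: spatial information is held implicitly in the diagram and only becomes a symbolic fact when a perceptual operator extracts it, mirroring the parallel operator sets described above.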
The authors assume that a cross between topographic and schematic maps provides the most helpful source of information for pedestrian navigation tasks. To use a map to find one's way through an unfamiliar environment, one must know which direction one is facing with respect to the map. Davies addresses this orientation problem through a cognitive modeling approach; this research also identifies map designs that are more usable in situations where orientation is a problem. Spatiotemporal planning is another task that may be assisted by a computational system.