A framework for organizing the many disparate capabilities required of synthetic cognitive systems is proposed as a basis for assessing the status of existing and proposed cognitive architectures and systems, as well as a measure of progress toward human-level machine intelligence. This framework divides the "ingredients" of cognition into six dimensions. Capabilities within these dimensions are ordered roughly by increasing level of sophistication. The cognitive dimensions and their capabilities are described here as the basis for the assessments of existing architectures provided in a companion paper.
As a cognitive architecture, ICARUS shares many aspects with other systems and with the recent proposal for a standard model of human-like minds. However, the architecture also commits to a distinctive combination of additional assumptions that are important. This paper discusses these assumptions and proposes them for inclusion in the standard model of human-like minds.
In this article we present DUAL-PECCS, an integrated knowledge representation system aimed at extending artificial capabilities in tasks such as conceptual categorization. It relies on two different sorts of cognitively inspired common-sense reasoning: prototypical reasoning and exemplar-based reasoning. Furthermore, it is grounded in the theoretical tenets of the dual process theory of mind and in the hypothesis of heterogeneous proxytypes, developed in the area of biologically inspired cognitive architectures (BICA). The system has been integrated into the ACT-R cognitive architecture and experimentally assessed in a conceptual categorization task, in which a target concept illustrated by a simple common-sense linguistic description had to be identified by resorting to a mix of categorization strategies. Compared to human-level categorization, the obtained results suggest that our proposal can help extend the representational and reasoning conceptual capabilities of standard cognitive artificial systems.
Idealizing intelligence as an embodied activity, involving an integration of cognition, perception, and the body, places the tightest constraints on the design space for AI artifacts, forcing AI to deeply understand the design tradeoffs and tricks that biology has developed. I propose that one step in the design of such artifacts is to broaden the notion of cognitive state from the current linguistic-symbolic, Language-of-Thought framework to a multi-modal one, in which perceptual and kinesthetic modalities participate in thinking. This contrasts with the roles assigned to perception and motor activity as modules external to central cognition in the currently dominant theories in AI and Cognitive Science. I develop the outlines of this proposal and describe the implementation of a bimodal version in which a diagrammatic representation component is added to the cognitive state.
Research in psychology often involves building computational models to test theories. The usual approach is to build models using the most convenient tool available. Newell instead proposed building models within the framework of general-purpose cognitive architectures. One advantage of this approach is that in some cases it is possible to provide more perspicuous explanations of experimental results in different but related tasks, as emerging from an underlying architecture. In this paper, we propose the use of a bimodal cognitive architecture called biSoar for modeling phenomena in spatial representation and reasoning. We show that biSoar can provide an architectural explanation for the phenomenon of simplification that arises in experiments on spatial recall. We build a biSoar model for one such spatial recall task, wayfinding, and discuss the role of the architecture in the emergence of simplification.