In the analysis, the molecular scene is reconstructed and interpreted in an iterative procedure that proceeds from an initially low-resolution, uninterpreted image to a fully interpreted high-resolution map. Accomplishing the goals of molecular scene analysis, however, requires representing protein structures in a knowledge base that can be easily accessed to retrieve general and specific properties of protein structure at different levels of abstraction (amino acid, secondary structure, molecule, etc.).
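The multi-level knowledge base described above can be sketched as a simple containment hierarchy. All class and field names below are invented for illustration; the abstract does not specify the actual representation:

```python
from dataclasses import dataclass, field

# Hypothetical three-level hierarchy: a molecule aggregates
# secondary-structure elements, which aggregate amino-acid residues.

@dataclass
class Residue:
    name: str            # amino acid code, e.g. "ALA"

@dataclass
class SecondaryStructure:
    kind: str            # e.g. "alpha-helix" or "beta-strand"
    residues: list = field(default_factory=list)

@dataclass
class Molecule:
    name: str
    elements: list = field(default_factory=list)

    def residues(self):
        # Query across abstraction levels: every residue in the molecule.
        return [r for ss in self.elements for r in ss.residues]
```

A retrieval at the molecule level (`Molecule.residues()`) then drills down through the intermediate secondary-structure level, matching the idea of accessing properties at different levels of abstraction.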
The widely demonstrated ability of humans to deal with multiple representations of information has a number of important implications for a proposed standard model of the mind (SMM). In this paper we outline four such implications and argue that an SMM must incorporate (a) multiple representational formats and (b) meta-cognitive processes that operate on them. We then describe current approaches to extending cognitive architectures with visual-spatial representations, in part to illustrate the limitations of current architectures in relation to the implications we raise, but also to identify the basis upon which a consensus about the nature of these additional representations can be reached. We believe that addressing these implications and outlining a specification for multiple representations should be a key goal for those seeking to develop a standard model of the mind.
This approach obviously has only limited applicability. Retrieval by image content involves the problem of obtaining image attributes. This problem can be solved by applying model-based reasoning with deformable models. Here, available models are previously recognized images, and deformable models are used for image alignment and thus recognition. Another important use of deformable models is automated image feature extraction. Here, a deformable model can be used to find image features, such as shape or size. These can be used to classify the image, which in turn can be used in conjunction with other symbolic attributes during decision making. Image feature extraction is the identification of image metadata, which can be used as: indexes to the image database, supporting scalable similarity-based retrieval; models of prototypical features (e.g., tumors in brain scans), implementing model-based retrieval; a mechanism for content-based compression (i.e., prototypes and differences are stored).
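The prototypes-and-differences compression idea can be illustrated with a toy sketch. The class name, storage layout, and nearest-prototype matching rule are assumptions made here for illustration; the abstract does not specify the actual scheme:

```python
import numpy as np

class PrototypeStore:
    """Toy content-based compression: each image is stored as a reference
    to its nearest prototype plus a difference array, so only prototypes
    and differences are kept rather than full independent images."""

    def __init__(self, prototypes):
        # prototypes: list of 2D numpy arrays (e.g., canonical feature templates)
        self.prototypes = prototypes
        self.records = []  # (prototype_index, difference) pairs

    def add(self, image):
        # Match the image to the prototype minimizing the L2 residual.
        residuals = [np.linalg.norm(image - p) for p in self.prototypes]
        k = int(np.argmin(residuals))
        self.records.append((k, image - self.prototypes[k]))
        return k

    def reconstruct(self, i):
        # Lossless reconstruction: prototype plus stored difference.
        k, diff = self.records[i]
        return self.prototypes[k] + diff
```

The prototype index returned by `add` doubles as a similarity-based index into the database: images matched to the same prototype are candidates for the same retrieval class.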
This paper presents a new Retinotopic Reasoning (R2) cognitive architecture that is inspired by studies of visual mental imagery in people. R2 is a hybrid symbolic-connectionist architecture, with certain components of the system represented in propositional, symbolic form, but with a primary working memory store that contains visual "mental" images that can be created and manipulated by the system. R2 is not intended to serve as a full-fledged, stand-alone cognitive architecture, but rather is a specialized system focusing on how visual mental imagery can be represented, learned, and used in support of intelligent behavior. Examples illustrate how R2 can be used to model human visuospatial cognition on several different standardized cognitive tests, including the Raven's Progressive Matrices test, the Block Design test, the Embedded Figures test, and the Paper Folding test.
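The hybrid store described above, with symbolic propositions alongside a manipulable image buffer, can be sketched minimally as follows. The interface and grid representation are hypothetical, chosen here only to make the create/manipulate distinction concrete; they are not R2's actual components:

```python
import numpy as np

class ImageryWorkingMemory:
    """Minimal sketch of a hybrid working memory: a set of symbolic
    facts next to a grid-based 'mental image' that supports direct
    creation and manipulation of imagery content."""

    def __init__(self, height, width):
        self.facts = set()                       # propositional, symbolic store
        self.image = np.zeros((height, width))   # retinotopic visual buffer

    def assert_fact(self, fact):
        self.facts.add(fact)

    def draw(self, r, c, patch):
        # Create imagery content by painting a patch into the buffer.
        h, w = patch.shape
        self.image[r:r + h, c:c + w] = patch

    def rotate90(self):
        # Manipulate the whole image, e.g., one mental-rotation step.
        self.image = np.rot90(self.image)
```

Operations like `rotate90` act on the image as an image rather than on symbolic descriptions, which is the kind of processing the imagery-based tests (e.g., Paper Folding) are taken to exercise.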
Vision and space are prominent modalities in our experiences as humans. We live in a richly visual world, and are constantly and acutely aware of our position in space and our surroundings. In contrast to this seemingly precise awareness, we are also able to reason abstractly, use language, and construct arbitrary hypothetical scenarios. In this position paper, we present an AI system we are building to work towards human capability in visuospatial processing. We use mental imagery processing as our psychological basis and integrate it with symbolic processing. To design this system, we are considering constraints from the natural world (as described by psychology and neuroscience), and those uncovered by AI research. In doing so, we hope to address the gap between abstract reasoning and detailed perception.