AAAI Conferences

Numerous psychological studies have been carried out, and several, often conflicting, models of mental imagery have been proposed. This paper does not present another computational model, but instead treats imagery as a problem-solving paradigm in artificial intelligence. We describe a concept of computational imagery [Papadias & Glasgow, 1991], which has potential applications to problems whose solutions by humans involve the use of mental imagery. As a basis for computational imagery, we define a knowledge representation scheme that brings to the foreground the most important visual and spatial properties of an image. Although psychological theories are used as a guide to these properties, we do not adhere to a strict cognitive model; whenever possible we attempt to overcome the limitations of the human information-processing system.

Spatial Reasoning in Indeterminate Worlds

AAAI Conferences

A possible worlds semantics for model-based spatial reasoning is presented. In this semantics, worlds are characterized by the alternative states that result from indeterminacy or partial knowledge. A world is represented as a set of symbolic arrays, where symbols in the array map to entities in the world and the relative locations of symbols correspond to the relative locations of entities. Deduction is carried out using a model-theoretic approach in which array representations are "inspected" using primitive array functions. Nonmonotonic reasoning using array representations is also discussed.
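The representation described above can be sketched in a few lines. The grid layout, the `locate` and `left_of` helpers, and the modal operators below are illustrative assumptions in the spirit of the abstract, not the paper's actual primitives: each "world" is a symbolic array, indeterminacy is captured by a set of alternative arrays, and a relation is deduced by inspecting symbol locations in each array.

```python
# Sketch: model-based spatial reasoning over symbolic arrays (illustrative
# names and layout; assumed, not taken from the paper).

def locate(array, symbol):
    """Return the (row, col) at which a symbol appears in a symbolic array."""
    for i, row in enumerate(array):
        for j, cell in enumerate(row):
            if cell == symbol:
                return (i, j)
    raise KeyError(symbol)

def left_of(array, a, b):
    """Primitive inspection function: is a in a column left of b?"""
    return locate(array, a)[1] < locate(array, b)[1]

# Partial knowledge of C's position yields two alternative arrays ("worlds").
worlds = [
    [["A", "B", "_"],
     ["_", "_", "C"]],
    [["A", "_", "B"],
     ["_", "C", "_"]],
]

def necessarily(rel, a, b, worlds):
    """A relation is certain if it holds in every alternative array."""
    return all(rel(w, a, b) for w in worlds)

def possibly(rel, a, b, worlds):
    """A relation is possible if it holds in at least one alternative array."""
    return any(rel(w, a, b) for w in worlds)

print(necessarily(left_of, "A", "B", worlds))  # holds in every world
print(possibly(left_of, "B", "C", worlds))     # holds in the first world only
```

Deduction here is model-theoretic in the sense the abstract describes: rather than proving `left_of(A, B)` from axioms, the arrays themselves are inspected, and quantifying over the set of arrays distinguishes what is necessarily true from what is merely possible under indeterminacy.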

A Model for Resolution Enhancement (Hyperacuity) in Sensory Representation

Neural Information Processing Systems

Heiligenberg (1987) recently proposed a model to explain how sensory maps could enhance resolution through the orderly arrangement of broadly tuned receptors. We have extended this model to the general case of polynomial weighting schemes and proved that the response function is also a polynomial of the same order. We further demonstrated that the Hermitian polynomials are eigenfunctions of the system. Finally, we suggested a biologically plausible mechanism for sensory representation of external stimuli with resolution far exceeding the inter-receptor separation.
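The core effect can be illustrated numerically. The receptor spacing, Gaussian tuning width, and first-order (linear) weighting scheme below are assumptions chosen for the sketch in the spirit of Heiligenberg-style models, not parameters from the paper: broadly tuned receptors on a regular grid, each response weighted by the receptor's preferred position, recover a stimulus location far more precisely than the inter-receptor spacing.

```python
# Sketch of hyperacuity from broadly tuned receptors (illustrative parameters).
import math

positions = list(range(10))   # receptors spaced 1 unit apart
sigma = 1.5                   # broad tuning: width exceeds the spacing
stimulus = 4.37               # true location, between receptors

# Gaussian (broadly tuned) receptor responses to the stimulus.
responses = [math.exp(-(p - stimulus) ** 2 / (2 * sigma ** 2))
             for p in positions]

# Linear (first-order polynomial) weighting: weight each response by the
# receptor's preferred position and normalize, giving a population estimate.
estimate = sum(p * r for p, r in zip(positions, responses)) / sum(responses)

print(round(estimate, 2))  # close to 4.37, far finer than the 1-unit spacing
```

Because the responses overlap, the normalized weighted sum interpolates between receptor positions, so the readout error is a small fraction of the grid spacing; this is the sense in which orderly arrangement of coarse detectors enhances resolution.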

This project introduces a novel model: the Knowledge Graph Convolutional Network (KGCN). The principal idea of this work is to forge a bridge between knowledge graphs, automated logical reasoning, and machine learning, using Grakn as the knowledge graph. A KGCN can be used to create vector representations (embeddings) of any labelled set of Grakn Things via supervised learning. Storing complex and interrelated data in a knowledge graph has many benefits, not least that the full context of each datapoint can be retained. However, many existing machine learning techniques rely on the existence of an input vector for each example.