Unsupervised Emergence of Egocentric Spatial Structure from Sensorimotor Prediction

arXiv.org Artificial Intelligence

Despite its omnipresence in robotics applications, the nature of spatial knowledge and the mechanisms that underlie its emergence in autonomous agents are still poorly understood. Recent theoretical work suggests that the Euclidean structure of space induces invariants in an agent's raw sensorimotor experience. We hypothesize that capturing these invariants is beneficial for sensorimotor prediction and that, under certain exploratory conditions, a motor representation capturing the structure of the external space should emerge as a byproduct of learning to predict future sensory experiences. We propose a simple sensorimotor predictive scheme, apply it to different agents and types of exploration, and evaluate the pertinence of these hypotheses. We show that a naive agent can capture the topology and metric regularity of its sensor's position in an egocentric spatial frame without any a priori knowledge or extraneous supervision.
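A minimal sketch of such a sensorimotor predictive scheme, assuming a PyTorch setup. The module names, layer sizes, and the random stand-in data are illustrative, not the paper's exact architecture; the key idea is that only the sensory prediction error supervises the motor encoder.

```python
import torch
import torch.nn as nn

class SensorimotorPredictor(nn.Module):
    def __init__(self, motor_dim, sensor_dim, repr_dim=3):
        super().__init__()
        # Motor encoder: maps raw motor commands to a low-dimensional code
        # that, per the hypothesis, should come to reflect the sensor's
        # egocentric spatial configuration.
        self.motor_encoder = nn.Sequential(
            nn.Linear(motor_dim, 64), nn.ReLU(),
            nn.Linear(64, repr_dim),
        )
        # Predictor: estimates the future sensation from the current
        # sensation and the encoded future motor state.
        self.predictor = nn.Sequential(
            nn.Linear(sensor_dim + repr_dim, 128), nn.ReLU(),
            nn.Linear(128, sensor_dim),
        )

    def forward(self, s_t, m_next):
        h = self.motor_encoder(m_next)
        return self.predictor(torch.cat([s_t, h], dim=-1))

model = SensorimotorPredictor(motor_dim=3, sensor_dim=20)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for (s_t, m_{t+1}, s_{t+1}) transitions gathered during exploration.
transitions = [(torch.randn(32, 20), torch.randn(32, 3), torch.randn(32, 20))
               for _ in range(100)]

for s_t, m_next, s_next in transitions:
    opt.zero_grad()
    loss = loss_fn(model(s_t, m_next), s_next)
    loss.backward()
    opt.step()
```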


Discovering space - Grounding spatial topology and metric regularity in a naive agent's sensorimotor experience

arXiv.org Artificial Intelligence

In line with the sensorimotor contingency theory, we investigate the problem of the perception of space from a fundamental sensorimotor perspective. Despite its pervasive role in our perception of the world, the origin of the concept of space remains largely mysterious. In the context of artificial perception, for example, this issue is usually circumvented by having engineers pre-define the spatial structure of the problem the agent has to face. We show here that the structure of space can be autonomously discovered by a naive agent in the form of sensorimotor regularities that correspond to so-called compensable sensory experiences: experiences that can be generated either by the agent or by its environment. By detecting such compensable experiences, the agent can infer the topological and metric structure of the external space in which its body is moving. We propose a theoretical description of the nature of these regularities and illustrate the approach on a simulated robotic arm equipped with an eye-like sensor, which interacts with an object. Finally, we show how these regularities can be used to build an internal representation of the sensor's external spatial configuration.
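A toy illustration of detecting a compensable experience, under assumptions not taken from the paper: a one-dimensional world with a point-like retina reading a Gaussian-profile object, where the sensation depends only on the relative configuration of sensor and object. The agent finds the motor command that restores the original sensation after the environment has moved the object; the magnitude of that command mirrors the metric of external space.

```python
import numpy as np

def sensation(p, o):
    # Sensor reading depends only on the relative configuration o - p.
    offsets = np.linspace(-1.0, 1.0, 11)        # receptor positions
    return np.exp(-((o - (p + offsets)) ** 2))  # Gaussian activations

p0, o0 = 0.0, 0.3
s0 = sensation(p0, o0)

o1 = o0 + 0.5                                   # environment moves the object
candidates = np.linspace(-1.0, 1.0, 201)        # explorable motor commands

# The command whose resulting sensation best matches s0 compensates the
# object's displacement: the experience is "compensable".
errors = [np.linalg.norm(sensation(p0 + dp, o1) - s0) for dp in candidates]
dp_star = candidates[int(np.argmin(errors))]
print(f"compensating command: {dp_star:.2f} (object moved by 0.50)")
```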


A Sensorimotor Perspective on Grounding the Semantic of Simple Visual Features

arXiv.org Artificial Intelligence

In Machine Learning and Robotics, the semantic content of visual features is usually provided to the system by a human who interprets it. Strictly unsupervised approaches, on the contrary, have difficulty relating the statistics of sensory inputs to their semantic content without also relying on prior knowledge introduced into the system. In this paper, we propose to tackle this problem from a sensorimotor perspective. In line with the Sensorimotor Contingencies Theory, we make the fundamental assumption that the semantic content of sensory inputs at least partially stems from the way an agent can actively transform them. We illustrate our approach by formalizing how simple visual features can induce invariants in a naive agent's sensorimotor experience, and evaluate it on a simple simulated visual system. Without any a priori knowledge about the way its sensorimotor information is encoded, we show how an agent can characterize the uniformity and edge-ness of the visual features it interacts with.
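A sketch of the kind of motion-induced invariant involved, under a simplified setup of my own rather than the paper's: a small square retina translated over a 2-D scene. A uniform feature leaves the sensation unchanged under any small shift, while an edge leaves it unchanged only for shifts along the edge; this asymmetry is a signature the agent can detect without prior knowledge of its encoding.

```python
import numpy as np

def read(image, r, c, n=5):
    # Raveled n-by-n retina reading at position (r, c).
    return image[r:r + n, c:c + n].ravel()

# Two 2-D scenes: a uniform field and a vertical step edge.
uniform = np.ones((40, 40))
edge = np.zeros((40, 40)); edge[:, 20:] = 1.0

def change(img, dr, dc):
    # Sensory change induced by a small exploratory shift (dr, dc).
    return np.linalg.norm(read(img, 18 + dr, 18 + dc) - read(img, 18, 18))

for name, img in [("uniform", uniform), ("edge", edge)]:
    along = change(img, 2, 0)    # motion along the (vertical) edge
    across = change(img, 0, 2)   # motion across it
    print(f"{name}: along={along:.1f}, across={across:.1f}")
# uniform: invariant to all motions; edge: invariant only along the edge.
```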


Grounding Perception: A Developmental Approach to Sensorimotor Contingencies

arXiv.org Artificial Intelligence

To date, no clear formalism for such mechanisms has arisen in the developmental robotics community. We propose predictive modeling [16], [17] as a computational mechanism to learn sensorimotor contingencies, and thus acquire perceptive skills. In the context of SMCT, predictive models can be autonomously estimated by the agent to capture structure in the way motor commands actively transform sensory inputs, namely sensorimotor contingencies. Predictive modeling allows the incremental acquisition of the skills required in developmental robotics, while providing a computational implementation of the concept of sensorimotor contingencies. Our current implementation of the formalism proposed in this paper uses a method that clusters state transition graphs to discover densely connected subgraphs. Note that similar methods have already been proposed by others, for example in navigation tasks for the segmentation of location data into rooms [18], or for sub-goal discovery in hierarchical reinforcement learning.
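A hedged sketch of discovering densely connected subgraphs in a state transition graph, using modularity-based community detection from networkx as a stand-in for the paper's (unspecified) clustering method. The toy graph mimics the cited navigation example: two "rooms" of densely interconnected states joined by a single doorway transition.

```python
import networkx as nx
from networkx.algorithms import community

# Toy transition graph: two fully connected groups of states plus one bridge.
G = nx.Graph()
room_a = [(i, j) for i in range(4) for j in range(i + 1, 4)]
room_b = [(i, j) for i in range(4, 8) for j in range(i + 1, 8)]
G.add_edges_from(room_a + room_b + [(3, 4)])   # one doorway edge

clusters = community.greedy_modularity_communities(G)
print([sorted(c) for c in clusters])
# Expected: two communities, {0,1,2,3} and {4,5,6,7}; each dense subgraph
# maps to a "place", i.e., a candidate perceptual state or sub-goal.
```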


Learning Representations of Spatial Displacement through Sensorimotor Prediction

arXiv.org Artificial Intelligence

Robots act in their environment through sequences of continuous motor commands. Because of the high dimensionality of the motor space and the infinite number of possible combinations of successive motor commands, agents need compact representations that capture the structure of the resulting displacements. For an autonomous agent with no a priori knowledge about its sensorimotor apparatus, this compression has to be learned. We propose to use Recurrent Neural Networks to encode motor sequences into a compact representation, which is used to predict the consequences of motor sequences in terms of sensory changes. We show that sensory prediction can successfully guide the compression of motor sequences into representations that are organized topologically in terms of spatial displacement.
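A minimal sketch of this architecture, assuming PyTorch: a GRU compresses a motor sequence into a fixed-size code, and an MLP predicts the resulting sensory change from the initial sensation and that code. Layer sizes, dimensions, and the random stand-in data are illustrative.

```python
import torch
import torch.nn as nn

class DisplacementEncoder(nn.Module):
    def __init__(self, motor_dim=3, sensor_dim=20, code_dim=2):
        super().__init__()
        # Recurrent encoder compresses a motor sequence of arbitrary length.
        self.rnn = nn.GRU(motor_dim, 32, batch_first=True)
        self.to_code = nn.Linear(32, code_dim)   # compact motor representation
        # Predictor maps (initial sensation, code) to the sensory change.
        self.predictor = nn.Sequential(
            nn.Linear(sensor_dim + code_dim, 64), nn.ReLU(),
            nn.Linear(64, sensor_dim),
        )

    def forward(self, motor_seq, s0):
        _, h = self.rnn(motor_seq)               # final hidden state
        code = self.to_code(h[-1])
        return self.predictor(torch.cat([s0, code], dim=-1)), code

model = DisplacementEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in exploration data: (motor sequence, initial sensation, sensory change).
seq = torch.randn(16, 10, 3)
s0 = torch.randn(16, 20)
ds = torch.randn(16, 20)

opt.zero_grad()
pred, code = model(seq, s0)
loss = nn.functional.mse_loss(pred, ds)  # prediction error is the only signal
loss.backward()
opt.step()
# After training on real data, `code` is expected to organize topologically
# according to the net spatial displacement produced by the motor sequence.
```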