Grounding Perception: A Developmental Approach to Sensorimotor Contingencies

arXiv.org Artificial Intelligence

To date, no clear formalism for those mechanisms has arisen in the developmental robotics community. We propose predictive modeling [16], [17] as such a computational mechanism to learn sensorimotor contingencies, and thus acquire perceptual skills. In the context of SMCT, predictive models can be autonomously estimated by the agent to capture structure in the way motor commands actively transform sensory inputs, namely sensorimotor contingencies. Predictive modeling allows the incremental acquisition of skills required in developmental robotics, while providing a computational implementation of the concept of sensorimotor contingencies. Our current implementation of the formalism proposed in this paper uses a method to cluster state transition graphs in order to discover densely connected subgraphs. Note that similar methods have already been proposed by others, for example in navigation tasks for the segmentation of location data into rooms [18], or for sub-goal discovery in hierarchical reinforcement learning (e.g.
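As an illustration of the graph-clustering step mentioned above, the sketch below uses off-the-shelf modularity-based community detection (networkx) as a stand-in for the paper's own method; the transition data and helper function are purely illustrative assumptions.

```python
# Illustrative only: cluster a state transition graph into densely
# connected subgraphs using modularity-based community detection.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def build_transition_graph(transitions):
    """Build a weighted undirected graph from (state, next_state) pairs."""
    G = nx.Graph()
    for s, s_next in transitions:
        if G.has_edge(s, s_next):
            G[s][s_next]["weight"] += 1
        else:
            G.add_edge(s, s_next, weight=1)
    return G

# Toy transition data: two densely connected groups joined by a weak bridge.
transitions = [(0, 1), (1, 2), (2, 0), (0, 2),   # group A
               (3, 4), (4, 5), (5, 3), (3, 5),   # group B
               (2, 3)]                            # bridge between groups
G = build_transition_graph(transitions)

# Densely connected subgraphs appear as modularity communities.
communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])   # e.g. [[0, 1, 2], [3, 4, 5]]
```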


Unsupervised Emergence of Egocentric Spatial Structure from Sensorimotor Prediction

arXiv.org Artificial Intelligence

Despite its omnipresence in robotics applications, the nature of spatial knowledge and the mechanisms that underlie its emergence in autonomous agents are still poorly understood. Recent theoretical works suggest that the Euclidean structure of space induces invariants in an agent's raw sensorimotor experience. We hypothesize that capturing these invariants is beneficial for sensorimotor prediction and that, under certain exploratory conditions, a motor representation capturing the structure of the external space should emerge as a byproduct of learning to predict future sensory experiences. We propose a simple sensorimotor predictive scheme, apply it to different agents and types of exploration, and evaluate the pertinence of these hypotheses. We show that a naive agent can capture the topology and metric regularity of its sensor's position in an egocentric spatial frame without any a priori knowledge or extraneous supervision.
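A minimal sketch of such a sensorimotor predictive scheme, assuming toy data and an illustrative architecture rather than the paper's exact model: a motor encoder whose latent code is trained solely through a sensory prediction loss, so that any spatial structure in the code has to emerge as a byproduct of prediction.

```python
# Illustrative sizes and architecture; the data here is random noise standing
# in for an agent's exploration experience.
import torch
import torch.nn as nn

motor_dim, sensor_dim, latent_dim = 4, 16, 2

motor_encoder = nn.Sequential(nn.Linear(motor_dim, 32), nn.Tanh(),
                              nn.Linear(32, latent_dim))      # candidate spatial code
predictor = nn.Sequential(nn.Linear(latent_dim + sensor_dim, 64), nn.Tanh(),
                          nn.Linear(64, sensor_dim))          # next-sensation predictor
opt = torch.optim.Adam(list(motor_encoder.parameters())
                       + list(predictor.parameters()), lr=1e-3)

def train_step(s_t, m_t, s_next):
    """One step: encode the motor command, predict the next sensory state."""
    h = motor_encoder(m_t)
    s_pred = predictor(torch.cat([h, s_t], dim=-1))
    loss = ((s_pred - s_next) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy batch standing in for sensorimotor exploration data.
s_t = torch.randn(256, sensor_dim)
m_t = torch.randn(256, motor_dim)
s_next = torch.randn(256, sensor_dim)
print(train_step(s_t, m_t, s_next))
# After training on real exploration data, the latent codes h can be inspected
# for the topology and metric regularity of the sensor's egocentric position.
```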


Learning agent's spatial configuration from sensorimotor invariants

arXiv.org Machine Learning

The design of robotic systems is largely dictated by our purely human intuition about how we perceive the world. This intuition has been proven incorrect with regard to a number of critical issues, such as visual change blindness. In order to develop truly autonomous robots, we must step away from this intuition and let robotic agents develop their own way of perceiving. The robot should start from scratch and gradually develop perceptual notions, under no prior assumptions, exclusively by looking into its sensorimotor experience and identifying repetitive patterns and invariants. One of the most fundamental perceptual notions, space, cannot be an exception to this requirement. In this paper we look into the prerequisites for the emergence of simplified spatial notions on the basis of a robot's sensorimotor flow. We show that the notion of space as environment-independent cannot be deduced solely from exteroceptive information, which is highly variable and is mainly determined by the contents of the environment. The environment-independent definition of space can be approached by looking into the functions that link the motor commands to changes in exteroceptive inputs. In a sufficiently rich environment, the kernels of these functions correspond uniquely to the spatial configuration of the agent's exteroceptors. We simulate a redundant robotic arm with a retina installed at its end-point and show how this agent can learn the configuration space of its retina. The resulting manifold has the topology of the Cartesian product of a plane and a circle, and corresponds to the planar position and orientation of the retina.
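The kernel-based idea can be illustrated with a deliberately simplified agent (my own toy setup, not the paper's arm-and-retina simulation): motor configurations are grouped together exactly when they produce identical exteroceptive readings in every environment, which happens precisely when they place the sensor at the same external position.

```python
# Toy redundant agent on a line: sensor position p = q1 + q2, so infinitely
# many motor configurations share one position (the kernel of the sensorimotor map).
import numpy as np

rng = np.random.default_rng(0)

def sensor_position(q):
    """Redundant 'kinematics': many motor configurations share one position."""
    return q[0] + q[1]

def reading(p, env):
    """Toy exteroception: summed 'intensity' of the environment at position p."""
    centers, weights = env
    return float(np.sum(weights * np.exp(-(centers - p) ** 2)))

def readings(q, envs):
    return np.array([reading(sensor_position(q), e) for e in envs])

# A collection of random environments (random point sources on the line).
envs = [(rng.uniform(-3, 3, 5), rng.uniform(0, 1, 5)) for _ in range(20)]

q_a = np.array([0.3, 0.7])   # sensor at 1.0
q_b = np.array([0.5, 0.5])   # sensor at 1.0 as well (same kernel element)
q_c = np.array([0.2, 0.2])   # sensor at 0.4

# Configurations are grouped only if their readings agree in every environment,
# which happens exactly when they place the sensor at the same point.
print(np.allclose(readings(q_a, envs), readings(q_b, envs)))   # True
print(np.allclose(readings(q_a, envs), readings(q_c, envs)))   # False
```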


Learning Representations of Spatial Displacement through Sensorimotor Prediction

arXiv.org Artificial Intelligence

Robots act in their environment through sequences of continuous motor commands. Because of the dimensionality of the motor space, as well as the infinite possible combinations of successive motor commands, agents need compact representations that capture the structure of the resulting displacements. In the case of an autonomous agent with no a priori knowledge about its sensorimotor apparatus, this compression has to be learned. We propose to use Recurrent Neural Networks to encode motor sequences into a compact representation, which is used to predict the consequence of motor sequences in terms of sensory changes. We show that sensory prediction can successfully guide the compression of motor sequences into representations that are organized topologically in terms of spatial displacement.
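A hedged sketch of this kind of architecture, with sizes, data, and module layout chosen for illustration rather than taken from the paper: a GRU compresses a motor sequence into a low-dimensional code that is trained only through prediction of the resulting sensory change.

```python
# Illustrative only: the 2-D code is shaped solely by sensory prediction.
import torch
import torch.nn as nn

motor_dim, sensor_dim, code_dim = 3, 10, 2

class MotorSequenceEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(motor_dim, 32, batch_first=True)
        self.to_code = nn.Linear(32, code_dim)        # compact displacement code
        self.predict = nn.Sequential(
            nn.Linear(code_dim + sensor_dim, 64), nn.ReLU(),
            nn.Linear(64, sensor_dim))                # predicted final sensory state

    def forward(self, motor_seq, s0):
        _, h = self.rnn(motor_seq)                    # summarize the whole sequence
        code = self.to_code(h[-1])
        return self.predict(torch.cat([code, s0], dim=-1)), code

model = MotorSequenceEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: sequences of 5 motor commands, initial and final sensory states.
motor_seq = torch.randn(128, 5, motor_dim)
s0, s_final = torch.randn(128, sensor_dim), torch.randn(128, sensor_dim)

s_pred, code = model(motor_seq, s0)
loss = ((s_pred - s_final) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
# On real exploration data, the codes of different motor sequences end up
# organized topologically according to the displacement they produce.
```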


Representation Learning in Partially Observable Environments using Sensorimotor Prediction

arXiv.org Artificial Intelligence

In order to explore and act autonomously in an environment, an agent needs to learn from the sensorimotor information that is captured while acting. By extracting the regularities in this sensorimotor stream, it can learn a model of the world, which in turn can be used as a basis for action and exploration. This requires the acquisition of compact representations from a possibly high-dimensional raw observation, which is noisy and ambiguous. In this paper, we learn sensory representations from sensorimotor prediction. We propose a model which integrates sensorimotor information over time and projects it into a sensory representation that is useful for prediction. Using a simple example, we emphasize the role of motor commands and memory in learning sensory representations.
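A minimal sketch of such a model, under assumed toy dimensions and layout: a recurrent cell integrates the sensorimotor stream over time (providing memory), a linear projection of its state serves as the sensory representation, and everything is trained through next-observation prediction.

```python
# Illustrative dimensions and random data; not the paper's exact architecture.
import torch
import torch.nn as nn

obs_dim, motor_dim, hidden_dim, repr_dim = 12, 3, 64, 8

cell = nn.GRUCell(obs_dim + motor_dim, hidden_dim)   # memory over the stream
project = nn.Linear(hidden_dim, repr_dim)            # sensory representation
predict = nn.Sequential(nn.Linear(repr_dim + motor_dim, 64), nn.ReLU(),
                        nn.Linear(64, obs_dim))      # next-observation predictor
params = (list(cell.parameters()) + list(project.parameters())
          + list(predict.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

# Toy sequence of ambiguous observations and motor commands.
T, batch = 20, 32
obs = torch.randn(T, batch, obs_dim)
motors = torch.randn(T, batch, motor_dim)

h = torch.zeros(batch, hidden_dim)
loss = 0.0
for t in range(T - 1):
    h = cell(torch.cat([obs[t], motors[t]], dim=-1), h)   # integrate over time
    z = project(h)                                         # representation at time t
    obs_pred = predict(torch.cat([z, motors[t]], dim=-1))
    loss = loss + ((obs_pred - obs[t + 1]) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```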