Persson, Andreas


Learning from Implicit Information in Natural Language Instructions for Robotic Manipulations

arXiv.org Artificial Intelligence

Human-robot interaction often takes the form of instructions given by a human to a robot. For a robot to successfully follow instructions, a common representation of the world and the objects in it should be shared between the human and the robot so that the instructions can be grounded. This representation can be achieved via learning, where the world representation and the language grounding are learned simultaneously. However, in robotics this can be a difficult task due to the cost and scarcity of data. In this paper, we tackle the problem by learning the robot's world representation and the language grounding separately. While this approach addresses the challenge of obtaining sufficient data, it may give rise to inconsistencies between the two learned components. We therefore further propose Bayesian learning to resolve such inconsistencies between the natural language grounding and the robot's world representation by exploiting spatio-relational information that is implicitly present in instructions given by a human. Moreover, we demonstrate the feasibility of our approach in a scenario involving a robotic arm in the physical world.
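
As a rough illustration of how a spatial relation implicit in an instruction (e.g., "the cup to the left of the box") could be used in a Bayesian update to reconcile the language grounding with the robot's world representation, consider the sketch below. All names (`Candidate`, `left_of_likelihood`, `posterior`) and the logistic likelihood model are illustrative assumptions, not the paper's implementation.

```python
import math
from dataclasses import dataclass


@dataclass
class Candidate:
    object_id: str
    x: float      # position in the robot's world representation
    prior: float  # belief from the language-grounding model alone


def left_of_likelihood(cand: Candidate, landmark_x: float) -> float:
    """Likelihood that `cand` is 'left of' the landmark (simple logistic model)."""
    return 1.0 / (1.0 + math.exp(cand.x - landmark_x))


def posterior(candidates, landmark_x):
    """Bayes' rule: combine grounding priors with the spatial-relation likelihood."""
    unnorm = [c.prior * left_of_likelihood(c, landmark_x) for c in candidates]
    z = sum(unnorm)
    return {c.object_id: w / z for c, w in zip(candidates, unnorm)}


# Two candidate objects the grounding model finds equally plausible for "the cup";
# the relation "left of the box" (box at x = 0.5) breaks the tie.
candidates = [Candidate("obj_1", x=0.2, prior=0.5),
              Candidate("obj_2", x=0.8, prior=0.5)]
print(posterior(candidates, landmark_x=0.5))
```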


An Ontology-Based Symbol Grounding System for Human-Robot Interaction

AAAI Conferences

This paper presents an ongoing collaboration to develop a perceptual anchoring framework which creates and maintains the symbol-percept links concerning household objects. The paper presents an approach to non-trivialize the symbol system using ontologies and to allow for HRI by enabling queries about object properties, their affordances, and their perceptual characteristics as viewed from the robot (e.g., when they were last seen). This position paper briefly describes the objective of creating a long-term perceptual anchoring framework for HRI and outlines the preliminary work done thus far.
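
The following minimal Python sketch is an illustration, not the authors' framework, of what an anchor record linking a symbol to perceptual data might look like, supporting the kinds of queries mentioned above (properties, affordances, when and where an object was last seen). All names (`Anchor`, `query_last_seen`) are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Tuple


@dataclass
class Anchor:
    symbol: str                 # e.g. "cup-1", the symbolic handle used in dialogue
    properties: Dict[str, str]  # perceived properties, e.g. {"color": "red"}
    affordances: List[str]      # ontology-derived affordances, e.g. ["graspable"]
    last_seen: datetime         # perceptual characteristic maintained over time
    position: Tuple[float, float, float]  # last observed pose in the robot frame


def query_last_seen(anchors: Dict[str, Anchor], symbol: str) -> str:
    """Answer a simple HRI query about when and where an anchored object was last seen."""
    a = anchors[symbol]
    return f"{a.symbol} last seen at {a.position} on {a.last_seen.isoformat()}"


anchors = {
    "cup-1": Anchor("cup-1", {"color": "red"}, ["graspable"],
                    datetime(2016, 5, 1, 14, 30), (0.4, 0.1, 0.0)),
}
print(query_last_seen(anchors, "cup-1"))
```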