This paper describes our work on using qualitative spatial interpretation and reasoning to achieve natural and efficient interaction between a human and an intelligent robot during navigation tasks. The Conceptual Route Graph, which combines conventional route graphs with qualitative spatial orientation calculi, serves as an internal model of human spatial knowledge on top of the robot's quantitative representation, so that a human's qualitative route instructions can be interpreted against the model. The tool SimSpace then visualizes and verifies the interpretation using qualitative spatial reasoning. Furthermore, SimSpace generates appropriate natural-language feedback if a route instruction cannot be interpreted properly.
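The idea of interpreting qualitative route instructions over a route graph can be illustrated with a toy sketch. This is our own minimal illustration, not the paper's Conceptual Route Graph or SimSpace: headings are reduced to four cardinal directions, the graph and instruction vocabulary are invented, and a failed lookup stands in for the case where feedback would be generated.

```python
# Toy sketch (not the paper's implementation): a route graph with
# qualitative headings, interpreting "left"/"right"/"forward" instructions.

HEADINGS = ["N", "E", "S", "W"]

def turn(heading, direction):
    """Rotate a qualitative heading 90 degrees left or right."""
    i = HEADINGS.index(heading)
    return HEADINGS[(i - 1) % 4] if direction == "left" else HEADINGS[(i + 1) % 4]

# Route graph: node -> {outgoing heading: next node}  (hypothetical layout)
GRAPH = {
    "A": {"N": "B"},
    "B": {"N": "C", "W": "D"},
}

def follow(start, heading, instructions):
    """Interpret a qualitative instruction sequence on the graph.
    Returns the final node, or None if an instruction cannot be grounded
    (the point where a system like SimSpace would generate feedback)."""
    node = start
    for step in instructions:
        if step in ("left", "right"):
            heading = turn(heading, step)
        elif step == "forward":
            if heading not in GRAPH.get(node, {}):
                return None  # no edge in this direction: instruction fails
            node = GRAPH[node][heading]
    return node

print(follow("A", "N", ["forward", "left", "forward"]))  # → D
```

Because the instruction is matched against the graph rather than metric coordinates, the same sequence remains valid under any geometric deformation that preserves the qualitative layout.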
We present an approach for enabling in-home service robots to follow natural language commands from non-expert users, with a particular focus on spatial language understanding. Specifically, we propose an extension to the semantic field model of spatial prepositions that enables the representation of dynamic spatial relations involving paths. The relevance of the proposed methodology to interactive robot learning is discussed, and the paper concludes with a description of how we plan to integrate and evaluate our proposed model with end-users.
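The extension from static semantic fields to dynamic, path-based relations can be sketched roughly as follows. The field shapes and aggregation rules here are our own assumptions for illustration, not the paper's proposed model: a static field assigns each point a degree of applicability for a preposition, and a dynamic relation scores a whole path by aggregating field values along it.

```python
import math

def near_field(point, landmark, sigma=1.0):
    """Static semantic field for 'near': decays with distance to the landmark."""
    d = math.dist(point, landmark)
    return math.exp(-(d * d) / (2 * sigma * sigma))

def score_to(path, landmark):
    """Dynamic relation 'to': the path should end near the landmark."""
    return near_field(path[-1], landmark)

def score_past(path, landmark):
    """Dynamic relation 'past': some point on the path is near the landmark,
    but the path does not end there."""
    return max(near_field(p, landmark) for p in path) - near_field(path[-1], landmark)

path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
print(score_to(path, (3.0, 0.0)))    # high: path ends at the landmark
print(score_past(path, (1.5, 0.0)))  # positive: path approaches, then moves away
```

A command interpreter could then select, among candidate paths, the one maximizing the score of the relation named in the user's utterance.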
A robotic chauffeur should reason about spatial information at a variety of scales, dimensions, and ontologies. Rich representations of both the quantitative and qualitative characteristics of space not only enable robust navigation behavior, but also permit natural communication with a human passenger. We apply a hierarchical framework of spatial knowledge inspired by human cognitive abilities, the Hybrid Spatial Semantic Hierarchy, to common navigation tasks: safe motion, localization, map-building, and route planning. We also discuss the straightforward mapping between the variety of ways in which people communicate with a chauffeur and the framework's heterogeneous concepts of spatial knowledge.
My goal is to understand human verbal route instructions by modeling and implementing the language, knowledge representation, and cognitive processes needed to communicate about spatial routes. To this end, I ran a study of how people give and follow route instructions, and I modeled the language used in the resulting instruction texts using standard computational linguistics techniques.
Robots that coexist with humans in their environment and perform services for them need the ability to interact with their users. One particular requirement for such robots is that they understand spatial relations and can place objects in accordance with the spatial relations expressed by their user. In this work, we present a convolutional neural network for estimating pixelwise object placement probabilities for a set of spatial relations from a single input image. During training, our network receives the learning signal by classifying hallucinated high-level scene representations as an auxiliary task. Unlike previous approaches, our method requires neither ground truth data for the pixelwise relational probabilities nor 3D models of the objects, which significantly broadens its practical applicability. Our results obtained using real-world data and human-robot experiments demonstrate the effectiveness of our method in reasoning about the best way to place objects to reproduce a spatial relation.
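The notion of a pixelwise placement probability map can be made concrete with a small stand-in. This is a hand-crafted heuristic of our own, not the paper's learned CNN: for the relation "right of" a reference bounding box, each pixel gets a score that is zero inside or left of the box and decays with vertical misalignment to its right.

```python
import math

def right_of_map(width, height, box):
    """Return a height x width grid of placement scores in [0, 1] for the
    relation "right of" the reference box (x0, y0, x1, y1).
    A heuristic stand-in for a learned per-pixel probability map."""
    x0, y0, x1, y1 = box
    cy = (y0 + y1) / 2.0                     # vertical center of the box
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            if x <= x1:                      # not to the right of the box
                row.append(0.0)
            else:                            # decay with vertical misalignment
                row.append(math.exp(-((y - cy) ** 2) / 8.0))
        grid.append(row)
    return grid

scores = right_of_map(8, 6, (1, 2, 3, 4))
best = max((scores[y][x], x, y) for y in range(6) for x in range(8))
print(best)  # highest-scoring pixels lie right of the box, vertically centered
```

A placement policy would then pick the argmax pixel (or sample from the normalized map) as the target drop location, exactly the role the learned probability map plays in the described system.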