Kollar, Thomas
Understanding Natural Language Commands for Robotic Navigation and Mobile Manipulation
Tellex, Stefanie (Massachusetts Institute of Technology) | Kollar, Thomas (Massachusetts Institute of Technology) | Dickerson, Steven (Massachusetts Institute of Technology) | Walter, Matthew R. (Massachusetts Institute of Technology) | Banerjee, Ashis Gopal (Massachusetts Institute of Technology) | Teller, Seth (Massachusetts Institute of Technology) | Roy, Nicholas (Massachusetts Institute of Technology)
This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs, dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command's hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as "Put the tire pallet on the truck." The model is trained on a corpus of commands collected via crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model. We evaluate the robot's performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system's performance. We demonstrate that our system can successfully follow many natural language commands from the corpus.
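As a rough illustration of what "dynamically instantiates a probabilistic graphical model" could look like in code, the sketch below creates a grounding variable for each phrase of a parsed command and scores candidate groundings with a placeholder factor. Every name here (Phrase, factor_score, the toy forklift objects) is invented for this example; the paper's factors are learned from the crowdsourced corpus, and its inference searches a much larger space of plans rather than using this exhaustive loop.

```python
from dataclasses import dataclass, field

@dataclass
class Phrase:
    """One node of a command's compositional parse, e.g. "put" -> ["pallet", "truck"]."""
    text: str
    children: list = field(default_factory=list)

def factor_score(phrase, grounding, child_groundings):
    # Placeholder factor: in the paper this would be a learned log-linear
    # model over linguistic and environmental features, not a label match.
    score = 1.0
    if phrase.text in grounding.get("labels", ()):
        score *= 2.0
    return score

def best_grounding(phrase, candidates):
    # Instantiate one grounding variable per phrase (mirroring the parse
    # structure) and pick its highest-scoring value exhaustively.
    child_results = [best_grounding(c, candidates) for c in phrase.children]
    best, best_score = None, float("-inf")
    for g in candidates:
        s = factor_score(phrase, g, child_results)
        if s > best_score:
            best, best_score = g, s
    return best

# Toy environment: objects a forklift might ground noun phrases to.
objects = [
    {"id": "pallet1", "labels": ("tire", "pallet")},
    {"id": "truck1", "labels": ("truck",)},
]

command = Phrase("put", [Phrase("pallet"), Phrase("truck")])
for child in command.children:
    print(child.text, "->", best_grounding(child, objects)["id"])
```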
A Discriminative Model for Understanding Natural Language Route Directions
Kollar, Thomas (Massachusetts Institute of Technology) | Tellex, Stefanie (Massachusetts Institute of Technology) | Roy, Nicholas (Massachusetts Institute of Technology)
To be useful teammates to human partners, robots must be able to follow spoken instructions given in natural language. However, determining the correct sequence of actions in response to a set of spoken instructions is a complex decision-making problem. There is a "semantic gap" between the high-level symbolic models of the world that people use and the low-level models of geometry, state dynamics, and perception that robots use. In this paper, we show how this gap can be bridged by inferring the best sequence of actions from a linguistic description and environmental features. This work improves upon previous work in three ways. First, by using a conditional random field (CRF), we learn the relative weights of environmental and linguistic features, enabling the system to learn the meanings of words and reducing the modeling effort required to follow commands. Second, a number of long-range features are added, which help the system exploit additional structure in the problem. Finally, given a natural language command, we infer both the referred path and landmark directly, thereby requiring the algorithm to pick a landmark by which it should navigate. The CRF is demonstrated to have 15% error on a held-out dataset, compared with 39% error for a Markov random field (MRF). By analyzing the additional annotations necessary for this work, we also find that natural language route directions map sequentially onto the corresponding path and landmarks 99.6% of the time. In addition, the size of the referred landmark varies from 0 m² to 1964 m², and the length of the referred path varies from 0 m to 40.83 m.
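To make the discriminative scoring concrete, here is a minimal sketch, assuming hand-set weights, of a log-linear model that combines linguistic and environmental features to pick a (path segment, landmark) pair for each instruction. The feature names, weights, toy map, and greedy decoder are all hypothetical: the paper learns its weights from data, and a CRF would decode the full sequence jointly rather than one instruction at a time.

```python
# Hypothetical feature weights: learned from data in the paper, hand-set here.
weights = {
    "word_matches_landmark": 2.0,
    "segment_near_landmark": 1.5,
    "segment_length_penalty": -0.1,
}

def features(instruction, segment, landmark):
    # Toy linguistic and environmental features for one
    # (instruction, segment, landmark) triple.
    return {
        "word_matches_landmark": float(landmark["name"] in instruction),
        "segment_near_landmark": float(segment["near"] == landmark["name"]),
        "segment_length_penalty": segment["length_m"],
    }

def score(instruction, segment, landmark):
    # Log-linear score: weighted sum of features, as in a CRF potential.
    f = features(instruction, segment, landmark)
    return sum(weights[k] * v for k, v in f.items())

def follow(instructions, segments, landmarks):
    """Greedy per-instruction decoding; a real CRF decodes the whole
    sequence jointly (e.g., with Viterbi), using long-range features."""
    plan = []
    for instr in instructions:
        best = max(((s, l) for s in segments for l in landmarks),
                   key=lambda pair: score(instr, *pair))
        plan.append(best)
    return plan

# Toy map: two corridor segments and two landmarks.
segments = [
    {"id": "hall_a", "near": "elevator", "length_m": 12.0},
    {"id": "hall_b", "near": "kitchen", "length_m": 30.0},
]
landmarks = [{"name": "elevator"}, {"name": "kitchen"}]

for seg, lm in follow(["go past the elevator"], segments, landmarks):
    print(seg["id"], "via", lm["name"])
```

The reason for learning the weights, as the abstract notes, is that the relative importance of word-match versus geometric features is then fit from data rather than engineered by hand.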