Room-Across-Room: Multilingual Vision-and-Language Navigation with Dense Spatiotemporal Grounding
Alexander Ku, Peter Anderson, Roma Patel, Eugene Ie, Jason Baldridge
Room-Across-Room (RxR) is a new Vision-and-Language Navigation (VLN) dataset. RxR is multilingual (English, Hindi, and Telugu) and larger (more paths and instructions) than other VLN datasets. It emphasizes the role of language in VLN by addressing known biases in paths and eliciting more references to visible entities. Furthermore, each word in an instruction is time-aligned to the virtual poses of instruction creators and validators. We establish baseline scores for monolingual and multilingual settings and multitask learning when including Room-to-Room annotations (Anderson et al., 2018b). We also provide results for a model that learns from synchronized pose traces by focusing only on portions of the panorama attended to in human demonstrations. The size, scope and detail of RxR dramatically expands the frontier for research on embodied language agents in simulated, photo-realistic environments.

Figure 1: RxR's instructions are densely grounded to the visual scene by aligning the annotator's virtual pose to their spoken instructions for navigating a path.
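To make the dense spatiotemporal grounding concrete, the sketch below shows one plausible data layout for a time-aligned instruction: each word carries a time span and the annotator's virtual pose at that moment. The class and field names (`Pose`, `AlignedWord`, `pano_id`, etc.) are illustrative assumptions, not the actual schema of the RxR release.

```python
from dataclasses import dataclass
from typing import List

# NOTE: hypothetical schema for illustration; the real RxR release
# defines its own JSON format for instructions and pose traces.

@dataclass
class Pose:
    pano_id: str       # panorama (viewpoint) the annotator occupied
    heading: float     # camera heading, radians
    elevation: float   # camera elevation, radians

@dataclass
class AlignedWord:
    word: str
    start_time: float  # seconds into the spoken instruction
    end_time: float
    pose: Pose         # annotator's virtual pose while saying the word

@dataclass
class Instruction:
    language: str            # e.g. "en-US", "hi-IN", "te-IN"
    path: List[str]          # sequence of panorama ids defining the route
    words: List[AlignedWord] # word-level spatiotemporal grounding

def words_at_pano(instr: Instruction, pano_id: str) -> List[str]:
    """Return the words spoken while the annotator was at a given panorama."""
    return [w.word for w in instr.words if w.pose.pano_id == pano_id]
```

A structure like this is what lets a model attend only to the panorama regions a human annotator was looking at while producing each phrase, as in the pose-trace baseline described above.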
arXiv.org Artificial Intelligence
Oct-15-2020