Grounding spatial relations
Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D
Understanding spatial relations (e.g., laptop on table) in visual input is important for both humans and robots. Existing datasets are insufficient as they lack large-scale, high-quality 3D ground truth information, which is critical for learning spatial relations. In this paper, we fill this gap by constructing Rel3D: the first large-scale, human-annotated dataset for grounding spatial relations in 3D. Rel3D enables quantifying the effectiveness of 3D information in predicting spatial relations on large-scale human data. Moreover, we propose minimally contrastive data collection---a novel crowdsourcing method for reducing dataset bias. The 3D scenes in our dataset come in minimally contrastive pairs: two scenes in a pair are almost identical, but a spatial relation holds in one and fails in the other.
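To make the idea of a minimally contrastive pair concrete, here is a minimal Python sketch of how such a pair might be represented; the field names and 3D coordinates are illustrative assumptions, not the released dataset's actual schema.

```python
from dataclasses import dataclass

# Hypothetical, simplified representation of a Rel3D-style sample;
# field names and values are assumptions, not the dataset's real schema.
@dataclass
class Scene:
    subject: str    # e.g., "laptop"
    relation: str   # e.g., "on"
    obj: str        # e.g., "table"
    positions: dict # 3D ground-truth positions per object
    holds: bool     # True if the spatial relation holds in this scene

@dataclass
class ContrastivePair:
    positive: Scene  # relation holds
    negative: Scene  # nearly identical scene where the relation fails

pair = ContrastivePair(
    positive=Scene("laptop", "on", "table",
                   {"laptop": (0.0, 0.76, 0.0), "table": (0.0, 0.0, 0.0)}, True),
    negative=Scene("laptop", "on", "table",
                   {"laptop": (0.9, 0.0, 0.0), "table": (0.0, 0.0, 0.0)}, False),
)
```

Because the two scenes differ only in the placement that decides the relation, a model cannot rely on object identity or scene context alone; it must use the spatial configuration itself.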
Review for NeurIPS paper: Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D
Strengths: The authors have produced a modestly large 3D scene dataset (about 10K scenes) in pairs of positive and negative relationships. The authors thus have taken care to generate a dataset that gives as much weight to negative examples as to positive ones. They have also dealt with various language ambiguity issues, as spatial relationships for a given view may be based either on the observer's frame or the object's frame of reference. The authors argue, and demonstrate with a small study, that 3D data offers an advantage over purely 2D approaches for determining spatial relationships. They also show that their minimally contrastive examples allow learning with increased sample efficiency.
Grounding Spatial Relations in Text-Only Language Models
Gorka Azkune, Ander Salaberria, Eneko Agirre
This paper shows that text-only Language Models (LM) can learn to ground spatial relations like "left of" or "below" if they are provided with explicit location information of objects and they are properly trained to leverage those locations. We perform experiments on a verbalized version of the Visual Spatial Reasoning (VSR) dataset, where images are coupled with textual statements which contain real or fake spatial relations between two objects of the image. We verbalize the images using an off-the-shelf object detector, adding location tokens to every object label to represent their bounding boxes in textual form. Given the small size of VSR, we do not observe any improvement when using locations alone, but pretraining the LM on a synthetic dataset we derive automatically yields significant improvements when location tokens are used. We thus show that locations allow LMs to ground spatial relations, with our text-only LMs outperforming Vision-and-Language Models and setting the new state-of-the-art for the VSR dataset. Our analysis shows that our text-only LMs can generalize beyond the relations seen in the synthetic dataset to some extent, also learning more useful information than that encoded in the spatial rules we used to create the synthetic dataset itself.
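The abstract's verbalization step (appending location tokens to detected object labels) can be illustrated with a short Python sketch; the token format, bin count, and function names below are assumptions for illustration, not the paper's exact scheme.

```python
# A minimal sketch of image verbalization with location tokens, assuming
# generic detector output; token format and bin count are illustrative
# assumptions, not the paper's actual tokenization.
def box_to_tokens(box, image_w, image_h, bins=32):
    """Discretize a bounding box (x1, y1, x2, y2) into location tokens."""
    x1, y1, x2, y2 = box
    coords = [x1 / image_w, y1 / image_h, x2 / image_w, y2 / image_h]
    return " ".join(f"<loc{int(c * (bins - 1))}>" for c in coords)

def verbalize(detections, image_w, image_h):
    """Turn detector output into a textual scene description for a text-only LM."""
    return " ; ".join(
        f"{label} {box_to_tokens(box, image_w, image_h)}"
        for label, box in detections
    )

# Example: two detected objects plus a VSR-style statement to verify.
detections = [("cat", (40, 120, 200, 300)), ("sofa", (0, 150, 480, 360))]
context = verbalize(detections, image_w=640, image_h=480)
statement = "The cat is on the sofa."
print(f"{context} [SEP] {statement}")
```

The text-only LM then receives only this string, so any spatial reasoning it performs must be grounded in the location tokens rather than in pixels.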