GRAID: Enhancing Spatial Reasoning of VLMs Through High-Fidelity Data Generation
Karim Elmaaroufi, Liheng Lai, Justin Svegliato, Yutong Bai, Sanjit A. Seshia, Matei Zaharia
arXiv.org Artificial Intelligence
Vision Language Models (VLMs) achieve strong performance on many vision-language tasks but often struggle with spatial reasoning--a prerequisite for many applications. Empirically, we find that a dataset produced by a current training data generation pipeline has a human validation rate of only 57.6%. Such low rates stem from limitations of current approaches: single-image 3D reconstruction introduces cascading modeling errors and requires wide answer tolerances, while caption-based methods require hyper-detailed annotations and suffer from generative hallucinations. We present GRAID, built on the key insight that qualitative spatial relationships can be reliably determined from 2D geometric primitives alone. By operating exclusively on 2D bounding boxes from standard object detectors, GRAID avoids both 3D reconstruction errors and generative hallucinations, producing datasets of higher quality than those generated by existing tools, as validated by human evaluations. We apply our framework to the BDD100k, NuImages, and Waymo datasets, generating over 8.5 million high-quality VQA pairs with questions spanning spatial relations, counting, ranking, and size comparisons. We evaluate one of the datasets and find it achieves 91.16% human-validated accuracy--compared to 57.6% for a dataset generated by recent work. Critically, we demonstrate that models trained on GRAID data learn spatial reasoning concepts that generalize: models fine-tuned on 6 question types improve on over 10 held-out types, with accuracy gains of 47.5% on BDD and 37.9% on NuImages for Llama 3.2 11B, and when trained on all question types, achieve improvements on several existing benchmarks such as BLINK. The GRAID framework, datasets, and additional information can be found on our project page.

Vision Language Models (VLMs) have already shown promise in a wide variety of applications, such as medical diagnosis (Jin et al., 2024), biology (Maruf et al., 2025), and engineering design (Picard et al., 2025). However, despite this promise, a key failure mode of VLMs is that they are poor spatial reasoners: they struggle to understand how objects are located in space and the spatial relationships between them. For example, in medical image analysis, Jin et al. (2024) found that VLMs were unable to recognize that skin lesions shown at different angles were the same pathology. Similarly, in robotics, Wang et al. (2025) found that without integrating explicit spatial relationships, VLMs were unable to produce high-level, executable robotic task plans.
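To make the key insight concrete, below is a minimal sketch of how qualitative relations such as "left of" or "larger than" can be decided deterministically from 2D detector boxes and turned into a templated VQA pair. The `Box` class, the predicates, the question template, and the coordinates are illustrative assumptions, not GRAID's actual interface:

```python
# Illustrative sketch (assumed names, not GRAID's actual API): qualitative
# spatial relations and a templated VQA pair derived from 2D bounding boxes.
from dataclasses import dataclass


@dataclass
class Box:
    """Axis-aligned 2D bounding box as returned by a standard object detector."""
    label: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    @property
    def area(self) -> float:
        return (self.x_max - self.x_min) * (self.y_max - self.y_min)


def left_of(a: Box, b: Box, margin: float = 0.0) -> bool:
    """a is strictly left of b when a's box ends before b's begins.

    The margin can be used to skip ambiguous borderline pairs instead of
    widening answer tolerances as 3D-reconstruction pipelines must."""
    return a.x_max + margin < b.x_min


def larger_than(a: Box, b: Box) -> bool:
    """Size comparison by projected 2D area; no 3D reconstruction involved."""
    return a.area > b.area


def spatial_vqa_pair(a: Box, b: Box) -> tuple[str, str]:
    """Template a question whose ground-truth answer is decided purely by
    box geometry, so no generative model can hallucinate the label."""
    question = f"Is the {a.label} to the left of the {b.label}?"
    answer = "yes" if left_of(a, b) else "no"
    return question, answer


# Two detections from a hypothetical driving scene (pixel coordinates).
car = Box("car", x_min=50, y_min=200, x_max=220, y_max=340)
person = Box("pedestrian", x_min=400, y_min=180, x_max=450, y_max=330)

print(spatial_vqa_pair(car, person))  # ('Is the car to the left of the pedestrian?', 'yes')
print(larger_than(car, person))       # True; counting and ranking follow the same pattern
```

Because the ground-truth answer is computed from geometry rather than generated by a model, hallucination is ruled out by construction; a small margin can drop borderline pairs rather than loosening the answers themselves.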
October 29, 2025