GLaD: Geometric Latent Distillation for Vision-Language-Action Models

Minghao Guo, Meng Cao, Jiachen Tao, Rongtao Xu, Yan Yan, Xiaodan Liang, Ivan Laptev, Xiaojun Chang

arXiv.org Artificial Intelligence 

Abstract--Most existing Vision-Language-Action (VLA) models rely primarily on RGB information, ignoring geometric cues crucial for spatial reasoning and manipulation. In this work, we introduce GLaD, a geometry-aware VLA framework that incorporates 3D geometric priors during pretraining through knowledge distillation. Rather than distilling geometric features solely into the vision encoder, we align the LLM's hidden states corresponding to visual tokens with features from a frozen geometry-aware vision transformer (VGGT), ensuring that geometric understanding is deeply integrated into the multimodal representations that drive action prediction. Pretrained on the Bridge dataset with this geometry distillation mechanism, GLaD achieves a 94.1% average success rate across four LIBERO task suites, outperforming UniVLA (92.5%), which uses identical pretraining data. These results validate that geometry-aware pretraining enhances spatial reasoning and policy generalization without requiring explicit depth sensors or 3D annotations.

Vision-Language-Action (VLA) models have emerged as a promising paradigm for embodied intelligence, enabling robots to generate control actions directly from visual observations and natural language instructions. Recent works [1]-[4] have demonstrated impressive performance on diverse manipulation tasks by leveraging large-scale multimodal pretraining. These models typically combine powerful vision encoders [5]-[7] with large language models to learn generalizable visuomotor policies from extensive robot demonstration datasets. Despite these advances, current VLA architectures fundamentally lack geometric understanding: the capability to perceive spatial positions, 3D structures, and relational arrangements among objects in a scene--knowledge that is essential for robots to reason about where objects are, how they relate to each other, and how to interact with them effectively.
Most VLAs rely on vision encoders pretrained with 2D contrastive objectives such as CLIP [5] or SigLIP [7], which excel at capturing semantic correspondences between images and text but do not encode 3D spatial information.
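The core distillation idea described above can be sketched as a simple auxiliary loss: project the LLM's hidden states at visual-token positions into the teacher's feature space and pull them toward frozen VGGT features. The class and parameter names below, and the choice of a cosine-similarity objective, are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometryDistillationLoss(nn.Module):
    """Sketch of a geometric latent distillation objective (assumed form).

    Aligns LLM hidden states at visual-token positions with features from
    a frozen geometry-aware teacher such as VGGT. The teacher receives no
    gradients; only the student projector and LLM are trained.
    """

    def __init__(self, llm_dim: int, teacher_dim: int):
        super().__init__()
        # Lightweight projector from the LLM hidden size to the teacher's
        # feature size (a hypothetical design choice for this sketch).
        self.proj = nn.Linear(llm_dim, teacher_dim)

    def forward(self, hidden_states, teacher_feats, visual_mask):
        # hidden_states: (B, T, llm_dim) LLM hidden states over the sequence
        # teacher_feats: (B, V, teacher_dim) frozen teacher features,
        #                one per visual token
        # visual_mask:   (B, T) bool mask selecting the V visual positions
        student = self.proj(hidden_states[visual_mask])          # (B*V, teacher_dim)
        teacher = teacher_feats.reshape(-1, teacher_feats.shape[-1]).detach()
        # Negative cosine similarity: minimized when student features point
        # in the same direction as the frozen geometric features.
        return 1.0 - F.cosine_similarity(student, teacher, dim=-1).mean()

# Toy usage with random tensors:
B, T, V, llm_dim, teacher_dim = 2, 10, 4, 16, 8
hs = torch.randn(B, T, llm_dim)
tf = torch.randn(B, V, teacher_dim)
mask = torch.zeros(B, T, dtype=torch.bool)
mask[:, :V] = True  # assume the first V tokens of each sequence are visual
loss = GeometryDistillationLoss(llm_dim, teacher_dim)(hs, tf, mask)
```

In this form the distillation term would simply be added, with a weighting coefficient, to the model's action-prediction loss during pretraining.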