VIRT: Vision Instructed Transformer for Robotic Manipulation

Zhuoling Li, Liangliang Ren, Jinrong Yang, Yong Zhao, Xiaoyang Wu, Zhenhua Xu, Xiang Bai, Hengshuang Zhao

arXiv.org Artificial Intelligence 

Robotic manipulation, owing to its multi-modal nature, often faces significant training ambiguity, necessitating explicit instructions that clearly delineate the manipulation details of a task. In this work, we highlight that vision instructions are naturally more comprehensible to recent robotic policies than the commonly adopted text instructions, because these policies are born with some vision understanding ability, like human infants. Building on this premise and drawing inspiration from cognitive science, we introduce the robotic imagery paradigm, which enables large-scale robotic data pre-training without text annotations. Additionally, we propose the robotic gaze strategy, which emulates the human eye-gaze mechanism, thereby guiding subsequent actions and focusing the policy's attention on the manipulated object. Leveraging these innovations, we develop VIRT, a fully Transformer-based policy. We design comprehensive tasks using both a physical robot and simulated environments to assess the efficacy of VIRT. The results indicate that VIRT can complete highly challenging tasks such as "opening the lid of a tightly sealed bottle", and the proposed techniques boost the success rates of the baseline policy on diverse challenging tasks from nearly 0% to more than 65%.

The key insight supporting this work is that existing robotic policies are akin to human infants, who are born with visual perception and reasoning abilities but, according to prior cognitive science literature (Colombo & Mitchell, 2009), do not comprehend natural language. Specifically, visual signals serve as the primary information source for recent robotic policies, and the backbones of these policies are pre-trained on large-scale image datasets before being trained on robotic data (He et al., 2016; Oquab et al., 2024). Therefore, these policies begin with a basic visual understanding capability, much like human infants. By contrast, natural language inputs are rarely incorporated into the pre-training of these backbones, suggesting that such policies lack natural language knowledge.