Yuan, Wentao
Gemini Robotics: Bringing AI into the Physical World
Gemini Robotics Team, Abeyruwan, Saminda, Ainslie, Joshua, Alayrac, Jean-Baptiste, Arenas, Montserrat Gonzalez, Armstrong, Travis, Balakrishna, Ashwin, Baruch, Robert, Bauza, Maria, Blokzijl, Michiel, Bohez, Steven, Bousmalis, Konstantinos, Brohan, Anthony, Buschmann, Thomas, Byravan, Arunkumar, Cabi, Serkan, Caluwaerts, Ken, Casarini, Federico, Chang, Oscar, Chen, Jose Enrique, Chen, Xi, Chiang, Hao-Tien Lewis, Choromanski, Krzysztof, D'Ambrosio, David, Dasari, Sudeep, Davchev, Todor, Devin, Coline, Di Palo, Norman, Ding, Tianli, Dostmohamed, Adil, Driess, Danny, Du, Yilun, Dwibedi, Debidatta, Elabd, Michael, Fantacci, Claudio, Fong, Cody, Frey, Erik, Fu, Chuyuan, Giustina, Marissa, Gopalakrishnan, Keerthana, Graesser, Laura, Hasenclever, Leonard, Heess, Nicolas, Hernaez, Brandon, Herzog, Alexander, Hofer, R. Alex, Humplik, Jan, Iscen, Atil, Jacob, Mithun George, Jain, Deepali, Julian, Ryan, Kalashnikov, Dmitry, Karagozler, M. Emre, Karp, Stefani, Kew, Chase, Kirkland, Jerad, Kirmani, Sean, Kuang, Yuheng, Lampe, Thomas, Laurens, Antoine, Leal, Isabel, Lee, Alex X., Lee, Tsang-Wei Edward, Liang, Jacky, Lin, Yixin, Maddineni, Sharath, Majumdar, Anirudha, Michaely, Assaf Hurwitz, Moreno, Robert, Neunert, Michael, Nori, Francesco, Parada, Carolina, Parisotto, Emilio, Pastor, Peter, Pooley, Acorn, Rao, Kanishka, Reymann, Krista, Sadigh, Dorsa, Saliceti, Stefano, Sanketi, Pannag, Sermanet, Pierre, Shah, Dhruv, Sharma, Mohit, Shea, Kathryn, Shu, Charles, Sindhwani, Vikas, Singh, Sumeet, Soricut, Radu, Springenberg, Jost Tobias, Sterneck, Rachel, Surdulescu, Razvan, Tan, Jie, Tompson, Jonathan, Vanhoucke, Vincent, Varley, Jake, Vesom, Grace, Vezzani, Giulia, Vinyals, Oriol, Wahid, Ayzaan, Welker, Stefan, Wohlhart, Paul, Xia, Fei, Xiao, Ted, Xie, Annie, Xie, Jinyu, Xu, Peng, Xu, Sichun, Xu, Ying, Xu, Zhuo, Yang, Yuxiang, Yao, Rui, Yaroshenko, Sergey, Yu, Wenhao, Yuan, Wentao, Zhang, Jingwei, Zhang, Tingnan, Zhou, Allan, Zhou, Yuxiang
Recent advancements in large multimodal models have led to the emergence of remarkable generalist capabilities in digital domains, yet their translation to physical agents such as robots remains a significant challenge. This report introduces a new family of AI models purposefully designed for robotics and built upon the foundation of Gemini 2.0. We present Gemini Robotics, an advanced Vision-Language-Action (VLA) generalist model capable of directly controlling robots. Gemini Robotics executes smooth and reactive movements to tackle a wide range of complex manipulation tasks while also being robust to variations in object types and positions, handling unseen environments, and following diverse, open-vocabulary instructions. We show that with additional fine-tuning, Gemini Robotics can be specialized to new capabilities, including solving long-horizon, highly dexterous tasks, learning new short-horizon tasks from as few as 100 demonstrations, and adapting to completely novel robot embodiments. This is made possible because Gemini Robotics builds on top of Gemini Robotics-ER, the second model we introduce in this work. Gemini Robotics-ER (Embodied Reasoning) extends Gemini's multimodal reasoning capabilities into the physical world, with enhanced spatial and temporal understanding. This enables capabilities relevant to robotics, including object detection, pointing, trajectory and grasp prediction, multi-view correspondence, and 3D bounding box prediction. We show how this novel combination can support a variety of robotics applications. We also discuss and address important safety considerations related to this new class of robotics foundation models. The Gemini Robotics family marks a substantial step towards developing general-purpose robots that realize AI's potential in the physical world.
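As a purely illustrative aside (not part of the report), the sketch below shows how a 2D "pointing" prediction of the kind described above could be turned into a 3D reach target by back-projecting the predicted pixel through a depth image with the pinhole camera model; all names and values here are assumptions.

```python
# Hypothetical sketch: lift a predicted pixel (u, v) to a 3D point in the camera frame
# using an aligned depth image and camera intrinsics. Not from the report itself.
import numpy as np

def backproject_point(u: float, v: float, depth_m: np.ndarray, K: np.ndarray) -> np.ndarray:
    """u, v: pixel predicted by the model; depth_m: HxW depth in meters; K: 3x3 intrinsics."""
    z = float(depth_m[int(round(v)), int(round(u))])   # depth at the predicted pixel
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with synthetic data: a 640x480 depth image at a constant 0.8 m.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 0.8)
target_cam = backproject_point(352.0, 261.0, depth, K)  # 3D reach target, camera frame
```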
Vote-Tree-Planner: Optimizing Execution Order in LLM-based Task Planning Pipeline via Voting
Zhang, Chaoyuan, Li, Zhaowei, Yuan, Wentao
Integrating large language models (LLMs) into closed-loop robotic task planning has become increasingly popular within embodied artificial intelligence. Previous efforts have mainly focused on leveraging the strong reasoning abilities of LLMs to enhance task planning performance, while often overlooking planning efficiency and executability, which suffer from repetitive queries to the LLM. This paper addresses the synergy between LLMs and task planning systems, aiming to minimize redundancy while enhancing planning effectiveness. Specifically, building upon Prog-Prompt and the high-level concept of Tree-Planner, we propose Vote-Tree-Planner, a sampling strategy that uses votes to guide plan traversal during decision making. Our approach is motivated by a straightforward observation: assigning weights to agents during decision making enables the evaluation of critical paths before execution. With this simple vote-tree construction, our method further improves the success rate while reducing the number of queries to the LLM. Experimental results show that Vote-Tree-Planner is more stable and achieves a higher average success rate and goal-condition recall on the unseen dataset than previous baseline methods. These findings underscore the potential of Vote-Tree-Planner to enhance planning accuracy, reliability, and efficiency in LLM-based planning systems.
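To make the idea concrete, here is a minimal sketch, under stated assumptions and not the authors' implementation: sampled plans are merged into a tree whose nodes carry vote counts, and the executor follows the highest-vote path before execution, so the LLM is queried once per batch of samples rather than at every step.

```python
# Minimal vote-tree sketch (illustrative assumptions, not the Vote-Tree-Planner code).
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class VoteNode:
    action: str
    votes: int = 0
    children: Dict[str, "VoteNode"] = field(default_factory=dict)

def build_vote_tree(sampled_plans: List[List[str]]) -> VoteNode:
    """Merge sampled plans (lists of action strings) into a tree with vote counts."""
    root = VoteNode(action="<root>")
    for plan in sampled_plans:
        node = root
        for action in plan:
            node = node.children.setdefault(action, VoteNode(action=action))
            node.votes += 1
    return root

def best_path(root: VoteNode) -> Tuple[List[str], int]:
    """Greedy traversal: at each step follow the child with the most votes."""
    path, node, total = [], root, 0
    while node.children:
        node = max(node.children.values(), key=lambda c: c.votes)
        path.append(node.action)
        total += node.votes
    return path, total

plans = [["open fridge", "grab milk", "close fridge"],
         ["open fridge", "grab milk", "pour milk"],
         ["open cabinet", "grab cup"]]
print(best_path(build_vote_tree(plans)))  # highest-vote plan and its accumulated votes
```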
AHA: A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation
Duan, Jiafei, Pumacay, Wilbert, Kumar, Nishanth, Wang, Yi Ru, Tian, Shulin, Yuan, Wentao, Krishna, Ranjay, Fox, Dieter, Mandlekar, Ajay, Guo, Yijie
Robotic manipulation in open-world settings requires not only task execution but also the ability to detect and learn from failures. While recent advances in vision-language models (VLMs) and large language models (LLMs) have improved robots' spatial reasoning and problem-solving abilities, they still struggle with failure recognition, limiting their real-world applicability. We introduce AHA, an open-source VLM designed to detect and reason about failures in robotic manipulation using natural language. By framing failure detection as a free-form reasoning task, AHA identifies failures and provides detailed, adaptable explanations across different robots, tasks, and environments. We fine-tune AHA using FailGen, a scalable framework that generates the first large-scale dataset of robotic failure trajectories, the AHA dataset. FailGen achieves this by procedurally perturbing successful demonstrations in simulation. Despite being trained solely on the AHA dataset, AHA generalizes effectively to real-world failure datasets, robotic systems, and unseen tasks. It surpasses the second-best model (GPT-4o in-context learning) by 10.3% and exceeds the average performance of six compared models, including five state-of-the-art VLMs, by 35.3% across multiple metrics and datasets. We integrate AHA into three manipulation frameworks that utilize LLMs/VLMs for reinforcement learning, task and motion planning, and zero-shot trajectory generation. AHA's failure feedback enhances these policies' performance by refining dense reward functions, optimizing task planning, and improving sub-task verification, boosting task success rates by an average of 21.4% across all three tasks compared to GPT-4 models.
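The following is a hypothetical sketch of the kind of procedural perturbation the abstract attributes to FailGen; the function, labels, and explanation text are invented for illustration and are not the released pipeline. A successful grasp waypoint is offset to synthesize a labeled failure example paired with a free-form explanation suitable as a fine-tuning target.

```python
# Illustrative failure-synthesis sketch (assumed interface, not the FailGen code).
import random
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FailureExample:
    perturbed_waypoint: Tuple[float, float, float]
    failure_mode: str
    explanation: str  # free-form text target for the VLM

def perturb_grasp(waypoint: Tuple[float, float, float],
                  offset_m: float = 0.05,
                  rng: random.Random = random.Random(0)) -> FailureExample:
    """Offset a successful grasp position along a random axis to create a missed grasp."""
    axis = rng.randrange(3)
    delta = offset_m * rng.choice([-1.0, 1.0])
    perturbed = list(waypoint)
    perturbed[axis] += delta
    return FailureExample(
        perturbed_waypoint=tuple(perturbed),
        failure_mode="missed_grasp",
        explanation=f"The gripper closed {offset_m * 100:.0f} cm away from the object "
                    f"along axis {axis}, so the grasp missed.",
    )

examples: List[FailureExample] = [perturb_grasp((0.4, 0.0, 0.1)) for _ in range(3)]
```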
RoboPoint: A Vision-Language Model for Spatial Affordance Prediction for Robotics
Yuan, Wentao, Duan, Jiafei, Blukis, Valts, Pumacay, Wilbert, Krishna, Ranjay, Murali, Adithyavairavan, Mousavian, Arsalan, Fox, Dieter
From rearranging objects on a table to putting groceries into shelves, robots must plan precise action points to perform tasks accurately and reliably. Despite the recent adoption of vision-language models (VLMs) to control robot behavior, VLMs struggle to precisely articulate robot actions using language. We introduce an automatic synthetic data generation pipeline that instruction-tunes VLMs to robotic domains and needs. Using the pipeline, we train RoboPoint, a VLM that predicts image keypoint affordances given language instructions. Compared to alternative approaches, our method requires no real-world data collection or human demonstration, making it much more scalable to diverse environments and viewpoints. In addition, RoboPoint is a general model that enables several downstream applications such as robot navigation, manipulation, and augmented reality (AR) assistance. Our experiments demonstrate that RoboPoint outperforms state-of-the-art VLMs (GPT-4o) and visual prompting techniques (PIVOT) by 21.8% in the accuracy of predicting spatial affordance and by 30.5% in the success rate of downstream tasks. Project website: https://robo-point.github.io.
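As an illustration only (the output format below is an assumption, not taken from the paper's code), here is a sketch of consuming keypoint-affordance predictions when they are returned as a text list of normalized (x, y) pairs and need to be mapped to pixel coordinates.

```python
# Assumed output format: a text list of normalized (x, y) pairs, e.g. "[(0.41, 0.62), ...]".
import re
from typing import List, Tuple

def parse_points(model_output: str, width: int, height: int) -> List[Tuple[int, int]]:
    """Extract normalized (x, y) pairs from text and scale them to pixel coordinates."""
    pairs = re.findall(r"\(\s*([0-9]*\.?[0-9]+)\s*,\s*([0-9]*\.?[0-9]+)\s*\)", model_output)
    return [(int(float(x) * width), int(float(y) * height)) for x, y in pairs]

# Example query: "place the mug in the free space between the plate and the bowl"
output = "[(0.41, 0.62), (0.44, 0.60)]"
print(parse_points(output, width=640, height=480))  # [(262, 297), (281, 288)]
```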
Evaluating Robustness of Visual Representations for Object Assembly Task Requiring Spatio-Geometrical Reasoning
Ku, Chahyon, Winge, Carl, Diaz, Ryan, Yuan, Wentao, Desingh, Karthik
This paper primarily focuses on evaluating and benchmarking the robustness of visual representations in the context of object assembly tasks. Specifically, it investigates the alignment and insertion of objects with geometrical extrusions and intrusions, commonly referred to as a peg-in-hole task. The accuracy required to detect and orient the peg and hole geometry in SE(3) space for successful assembly poses significant challenges. To address this, we employ a general framework in visuomotor policy learning that uses visual pretraining models as vision encoders. Our study investigates the robustness of this framework when applied to a dual-arm manipulation setup, specifically with respect to grasp variations. Our quantitative analysis shows that existing pretrained models fail to capture the essential visual features necessary for this task, whereas a visual encoder trained from scratch consistently outperforms the frozen pretrained models. Moreover, we discuss rotation representations and associated loss functions that substantially improve policy learning. We present a novel task scenario designed to evaluate progress in visuomotor policy learning, with a specific focus on improving the robustness of intricate assembly tasks that require both geometrical and spatial reasoning. Videos, additional experiments, the dataset, and code are available at https://bit.ly/geometric-peg-in-hole.
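One commonly used option for the rotation-representation design choice mentioned above is the continuous 6D representation (two 3-vectors orthonormalized via Gram-Schmidt) paired with a geodesic distance loss. The sketch below shows that option purely as an example, without claiming it is the paper's exact formulation.

```python
# Example rotation representation and loss (an assumption, not the paper's implementation).
import numpy as np

def rotation_from_6d(sixd: np.ndarray) -> np.ndarray:
    """Map a 6D vector to a rotation matrix via Gram-Schmidt orthonormalization."""
    a1, a2 = sixd[:3], sixd[3:]
    b1 = a1 / np.linalg.norm(a1)
    a2 = a2 - np.dot(b1, a2) * b1           # remove the component along b1
    b2 = a2 / np.linalg.norm(a2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=1)   # columns form a right-handed frame

def geodesic_loss(R_pred: np.ndarray, R_gt: np.ndarray) -> float:
    """Angle (radians) of the relative rotation between prediction and ground truth."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

R = rotation_from_6d(np.array([1.0, 0.1, 0.0, 0.0, 1.0, 0.2]))
print(geodesic_loss(R, np.eye(3)))          # small angle, since R is near identity
```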
M2T2: Multi-Task Masked Transformer for Object-centric Pick and Place
Yuan, Wentao, Murali, Adithyavairavan, Mousavian, Arsalan, Fox, Dieter
With the advent of large language models and large-scale robotic datasets, there has been tremendous progress in high-level decision-making for object manipulation. These generic models are able to interpret complex tasks using language commands, but they often have difficulty generalizing to out-of-distribution objects due to the limitations of their low-level action primitives. In contrast, existing task-specific models excel in low-level manipulation of unknown objects but only work for a single type of action. To bridge this gap, we present M2T2, a single model that supplies different types of low-level actions that work robustly on arbitrary objects in cluttered scenes. M2T2 is a transformer model that reasons about contact points and predicts valid gripper poses for different action modes given a raw point cloud of the scene. Trained on a large-scale synthetic dataset with 128K scenes, M2T2 achieves zero-shot sim2real transfer on a real robot, outperforming a baseline system built from state-of-the-art task-specific models by about 19% in overall performance and 37.5% in challenging scenes where the object needs to be re-oriented for collision-free placement. M2T2 also achieves state-of-the-art results on a subset of language-conditioned tasks in RLBench. Videos of robot experiments on unseen objects in both the real world and simulation are available on our project website https://m2-t2.github.io.
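For illustration only (this is not the M2T2 code), the sketch below shows how a predicted contact point together with approach and closing directions can be assembled into the kind of 4x4 homogeneous gripper pose such a model outputs per action mode.

```python
# Assembling a gripper pose from a contact point and directions (illustrative assumptions).
import numpy as np

def gripper_pose_from_contact(contact: np.ndarray,
                              approach: np.ndarray,
                              closing: np.ndarray,
                              standoff: float = 0.10) -> np.ndarray:
    """Build a homogeneous gripper pose whose z-axis approaches the contact point."""
    z = approach / np.linalg.norm(approach)
    x = closing - np.dot(closing, z) * z    # make the closing axis orthogonal to z
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, :3] = np.stack([x, y, z], axis=1)
    T[:3, 3] = contact - standoff * z       # back the gripper off along the approach
    return T

pose = gripper_pose_from_contact(contact=np.array([0.5, 0.0, 0.2]),
                                 approach=np.array([0.0, 0.0, -1.0]),
                                 closing=np.array([1.0, 0.0, 0.0]))
```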
TerrainNet: Visual Modeling of Complex Terrain for High-speed, Off-road Navigation
Meng, Xiangyun, Hatch, Nathan, Lambert, Alexander, Li, Anqi, Wagener, Nolan, Schmittle, Matthew, Lee, JoonHo, Yuan, Wentao, Chen, Zoey, Deng, Samuel, Okopal, Greg, Fox, Dieter, Boots, Byron, Shaban, Amirreza
Effective use of camera-based vision systems is essential for robust performance in autonomous off-road driving, particularly in the high-speed regime. Despite success in structured, on-road settings, current end-to-end approaches for scene prediction have yet to be successfully adapted for complex outdoor terrain. To this end, we present TerrainNet, a vision-based terrain perception system for semantic and geometric terrain prediction for aggressive, off-road navigation. The approach relies on several key insights and practical considerations for achieving reliable terrain modeling. The network includes a multi-headed output representation to capture fine- and coarse-grained terrain features necessary for estimating traversability. Accurate depth estimation is achieved using self-supervised depth completion with multi-view RGB and stereo inputs. Requirements for real-time performance and fast inference speeds are met using efficient, learned image feature projections. Furthermore, the model is trained on a large-scale, real-world off-road dataset collected across a variety of diverse outdoor environments. We show how TerrainNet can also be used for costmap prediction and provide a detailed framework for integration into a planning module. We demonstrate the performance of TerrainNet through extensive comparison to current state-of-the-art baselines for camera-only scene prediction. Finally, we showcase the effectiveness of integrating TerrainNet within a complete autonomous-driving stack by conducting a real-world vehicle test in a challenging off-road scenario.
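As a hypothetical sketch only (the class labels, weights, and cost formula are placeholders, not the paper's formulation), the snippet below combines a per-cell semantic map and an elevation map into a scalar bird's-eye-view costmap of the kind a downstream planner could consume.

```python
# Placeholder costmap construction from semantic and elevation layers (assumed values).
import numpy as np

SEMANTIC_COST = {0: 0.0,   # smooth trail
                 1: 0.4,   # tall grass
                 2: 1.0}   # obstacle / non-traversable

def build_costmap(semantics: np.ndarray, elevation: np.ndarray,
                  slope_weight: float = 2.0) -> np.ndarray:
    """Per-cell cost = semantic penalty + weighted local slope magnitude, clipped to [0, 1]."""
    sem_cost = np.vectorize(SEMANTIC_COST.get)(semantics).astype(float)
    gy, gx = np.gradient(elevation)          # finite-difference slope estimate
    slope = np.hypot(gx, gy)
    return np.clip(sem_cost + slope_weight * slope, 0.0, 1.0)

semantics = np.random.randint(0, 3, size=(64, 64))
elevation = np.random.rand(64, 64) * 0.2     # meters
costmap = build_costmap(semantics, elevation)
```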