Sermanet, Pierre
Gemini Robotics: Bringing AI into the Physical World
Gemini Robotics Team, Abeyruwan, Saminda, Ainslie, Joshua, Alayrac, Jean-Baptiste, Arenas, Montserrat Gonzalez, Armstrong, Travis, Balakrishna, Ashwin, Baruch, Robert, Bauza, Maria, Blokzijl, Michiel, Bohez, Steven, Bousmalis, Konstantinos, Brohan, Anthony, Buschmann, Thomas, Byravan, Arunkumar, Cabi, Serkan, Caluwaerts, Ken, Casarini, Federico, Chang, Oscar, Chen, Jose Enrique, Chen, Xi, Chiang, Hao-Tien Lewis, Choromanski, Krzysztof, D'Ambrosio, David, Dasari, Sudeep, Davchev, Todor, Devin, Coline, Di Palo, Norman, Ding, Tianli, Dostmohamed, Adil, Driess, Danny, Du, Yilun, Dwibedi, Debidatta, Elabd, Michael, Fantacci, Claudio, Fong, Cody, Frey, Erik, Fu, Chuyuan, Giustina, Marissa, Gopalakrishnan, Keerthana, Graesser, Laura, Hasenclever, Leonard, Heess, Nicolas, Hernaez, Brandon, Herzog, Alexander, Hofer, R. Alex, Humplik, Jan, Iscen, Atil, Jacob, Mithun George, Jain, Deepali, Julian, Ryan, Kalashnikov, Dmitry, Karagozler, M. Emre, Karp, Stefani, Kew, Chase, Kirkland, Jerad, Kirmani, Sean, Kuang, Yuheng, Lampe, Thomas, Laurens, Antoine, Leal, Isabel, Lee, Alex X., Lee, Tsang-Wei Edward, Liang, Jacky, Lin, Yixin, Maddineni, Sharath, Majumdar, Anirudha, Michaely, Assaf Hurwitz, Moreno, Robert, Neunert, Michael, Nori, Francesco, Parada, Carolina, Parisotto, Emilio, Pastor, Peter, Pooley, Acorn, Rao, Kanishka, Reymann, Krista, Sadigh, Dorsa, Saliceti, Stefano, Sanketi, Pannag, Sermanet, Pierre, Shah, Dhruv, Sharma, Mohit, Shea, Kathryn, Shu, Charles, Sindhwani, Vikas, Singh, Sumeet, Soricut, Radu, Springenberg, Jost Tobias, Sterneck, Rachel, Surdulescu, Razvan, Tan, Jie, Tompson, Jonathan, Vanhoucke, Vincent, Varley, Jake, Vesom, Grace, Vezzani, Giulia, Vinyals, Oriol, Wahid, Ayzaan, Welker, Stefan, Wohlhart, Paul, Xia, Fei, Xiao, Ted, Xie, Annie, Xie, Jinyu, Xu, Peng, Xu, Sichun, Xu, Ying, Xu, Zhuo, Yang, Yuxiang, Yao, Rui, Yaroshenko, Sergey, Yu, Wenhao, Yuan, Wentao, Zhang, Jingwei, Zhang, Tingnan, Zhou, Allan, Zhou, Yuxiang
Recent advancements in large multimodal models have led to the emergence of remarkable generalist capabilities in digital domains, yet their translation to physical agents such as robots remains a significant challenge. This report introduces a new family of AI models purposefully designed for robotics and built upon the foundation of Gemini 2.0. We present Gemini Robotics, an advanced Vision-Language-Action (VLA) generalist model capable of directly controlling robots. Gemini Robotics executes smooth and reactive movements to tackle a wide range of complex manipulation tasks, is robust to variations in object types and positions, handles unseen environments, and follows diverse, open-vocabulary instructions. We show that with additional fine-tuning, Gemini Robotics can be specialized to new capabilities, including solving long-horizon, highly dexterous tasks, learning new short-horizon tasks from as few as 100 demonstrations, and adapting to completely novel robot embodiments. This is made possible because Gemini Robotics builds on top of the Gemini Robotics-ER model, the second model we introduce in this work. Gemini Robotics-ER (Embodied Reasoning) extends Gemini's multimodal reasoning capabilities into the physical world, with enhanced spatial and temporal understanding. This enables capabilities relevant to robotics, including object detection, pointing, trajectory and grasp prediction, as well as multi-view correspondence and 3D bounding box prediction. We show how this novel combination can support a variety of robotics applications. We also discuss and address important safety considerations related to this new class of robotics foundation models. The Gemini Robotics family marks a substantial step towards developing general-purpose robots that realize AI's potential in the physical world.
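The split described above, between an embodied-reasoning model that produces spatial outputs (points, boxes, grasps) and a VLA policy that emits low-level actions, can be pictured with a minimal sketch. Everything below is hypothetical placeholder code for illustration only; the class names, methods, and outputs are assumptions, not the Gemini Robotics interface.

# Minimal sketch (not the Gemini Robotics API): illustrates the division of labor
# between an embodied-reasoning (ER) model producing spatial predictions and a VLA
# policy producing low-level robot actions. All names and outputs are placeholders.
from dataclasses import dataclass
from typing import List
import random

@dataclass
class Point:            # 2D point in image coordinates, as an ER "pointing" output
    x: float
    y: float

@dataclass
class Box3D:            # 3D bounding box as center + size in meters, as an ER output
    center: List[float]
    size: List[float]

class EmbodiedReasoner:
    """Stub for an ER model: maps (image, query) to spatial predictions."""
    def point_to(self, image, query: str) -> Point:
        return Point(random.random(), random.random())        # placeholder prediction
    def box_3d(self, image, query: str) -> Box3D:
        return Box3D([0.4, 0.0, 0.1], [0.06, 0.06, 0.12])      # placeholder prediction

class VLAPolicy:
    """Stub for a VLA policy: maps (image, instruction) to a short chunk of actions."""
    def act(self, image, instruction: str) -> List[List[float]]:
        return [[random.uniform(-1, 1) for _ in range(7)] for _ in range(4)]

if __name__ == "__main__":
    image = object()                       # stand-in for a camera frame
    er, policy = EmbodiedReasoner(), VLAPolicy()
    grasp_point = er.point_to(image, "handle of the mug")
    actions = policy.act(image, "pick up the mug by its handle")
    print(grasp_point, len(actions), "action steps")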
SciFi-Benchmark: How Would AI-Powered Robots Behave in Science Fiction Literature?
Sermanet, Pierre, Majumdar, Anirudha, Sindhwani, Vikas
Given the recent rate of progress in artificial intelligence (AI) and robotics, a tantalizing question is emerging: would robots controlled by emerging AI systems be strongly aligned with human values? In this work, we propose a scalable way to probe this question by generating a benchmark spanning the key moments in 824 major pieces of science fiction literature (movies, TV, novels, and scientific books) where an agent (AI or robot) made critical decisions (good or bad). We use an LLM's recollection of each key moment to generate questions in similar situations, the decisions made by the agent, and alternative decisions it could have made (good or bad). We then measure an approximation of how well models align with human values on a set of human-voted answers. We also generate rules that can be automatically improved via an amendment process in order to generate the first Sci-Fi-inspired constitutions for promoting ethical behavior in AIs and robots in the real world. Our first finding is that modern LLMs paired with constitutions turn out to be well-aligned with human values (95.8%), contrary to the unsettling decisions typically made in SciFi (only 21.2% alignment). Second, we find that generated constitutions substantially increase alignment compared to the base model (79.4% to 95.8%), and show resilience to an adversarial prompt setting (23.3% to 92.3%). Additionally, we find that those constitutions are among the top performers on the ASIMOV Benchmark, which is derived from real-world images and hospital injury reports. Sci-Fi-inspired constitutions are thus highly aligned and applicable in real-world situations. We release SciFi-Benchmark: a large-scale dataset to advance robot ethics and safety research. It comprises 9,056 questions and 53,384 answers, in addition to a smaller human-labeled evaluation set. Data is available at https://scifi-benchmark.github.io
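A minimal sketch of the alignment measure implied above, the fraction of benchmark questions on which a model's chosen answer matches the human-voted answer, is shown below. The data structures and field names are illustrative assumptions, not the released SciFi-Benchmark schema.

# Sketch of an alignment-rate computation: a model is counted as aligned on a question
# when its chosen answer matches the majority-preferred answer among human votes.
from collections import Counter

def human_preferred(votes):
    """Return the answer id with the most human votes."""
    return Counter(votes).most_common(1)[0][0]

def alignment_rate(questions):
    """questions: list of dicts with 'model_answer' and 'human_votes' (list of answer ids)."""
    matches = sum(q["model_answer"] == human_preferred(q["human_votes"]) for q in questions)
    return matches / len(questions)

if __name__ == "__main__":
    demo = [
        {"model_answer": "refuse", "human_votes": ["refuse", "refuse", "comply"]},
        {"model_answer": "comply", "human_votes": ["refuse", "refuse", "refuse"]},
    ]
    print(f"alignment: {alignment_rate(demo):.1%}")   # 50.0% on this toy example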
Generating Robot Constitutions & Benchmarks for Semantic Safety
Sermanet, Pierre, Majumdar, Anirudha, Irpan, Alex, Kalashnikov, Dmitry, Sindhwani, Vikas
Until recently, robotics safety research was predominantly about collision avoidance and hazard reduction in the immediate vicinity of a robot. Since the advent of large vision and language models (VLMs), robots are now also capable of higher-level semantic scene understanding and natural language interactions with humans. Despite their known vulnerabilities (e.g., hallucinations or jailbreaking), VLMs are being handed control of robots capable of physical contact with the real world. This can lead to dangerous behaviors, making semantic safety for robots a matter of immediate concern. Our contributions in this paper are twofold: first, to address these emerging risks, we release the ASIMOV Benchmark, a large-scale and comprehensive collection of datasets for evaluating and improving semantic safety of foundation models serving as robot brains. Our data generation recipe is highly scalable: by leveraging text and image generation techniques, we generate undesirable situations from real-world visual scenes and human injury reports from hospitals. Second, we develop a framework to automatically generate robot constitutions from real-world data to steer a robot's behavior using Constitutional AI mechanisms. We propose a novel auto-amending process that is able to introduce nuances in written rules of behavior; this can lead to increased alignment with human preferences on behavior desirability and safety. We explore trade-offs between generality and specificity across a diverse set of constitutions of different lengths, and demonstrate that a robot is able to effectively reject unconstitutional actions. We measure a top alignment rate of 84.3% on the ASIMOV Benchmark using generated constitutions, outperforming no-constitution baselines and human-written constitutions. Data is available at asimov-benchmark.github.io
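The auto-amending process described above can be sketched as a simple accept-if-better loop: propose an amendment to the current constitution and keep it only when measured alignment improves. The functions below are stand-ins for LLM calls and benchmark scoring; this is an illustrative sketch, not the paper's implementation.

# Illustrative auto-amendment loop: propose a change to the constitution, keep it only
# if benchmark alignment improves. `propose_amendment` and `evaluate_alignment` are
# placeholders for LLM calls and for scoring on a benchmark such as ASIMOV.
import random

def propose_amendment(constitution: str) -> str:
    # Placeholder for an LLM that rewrites or extends rules with added nuance.
    return constitution + f"\nRule {random.randint(100, 999)}: defer to a human when uncertain."

def evaluate_alignment(constitution: str) -> float:
    # Placeholder for measuring agreement with human preferences on held-out questions.
    return min(1.0, 0.6 + 0.01 * constitution.count("Rule") + random.uniform(0, 0.02))

def auto_amend(constitution: str, steps: int = 10) -> str:
    best_score = evaluate_alignment(constitution)
    for _ in range(steps):
        candidate = propose_amendment(constitution)
        score = evaluate_alignment(candidate)
        if score > best_score:                 # accept only improving amendments
            constitution, best_score = candidate, score
    return constitution

if __name__ == "__main__":
    print(auto_amend("Rule 1: a robot may not injure a human being."))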
Predictive Red Teaming: Breaking Policies Without Breaking Robots
Majumdar, Anirudha, Sharma, Mohit, Kalashnikov, Dmitry, Singh, Sumeet, Sermanet, Pierre, Sindhwani, Vikas
Is it possible to expose the vulnerabilities of a given robot policy with respect to changes in environmental factors such as lighting, visual distractors, and object placement without performing hardware evaluations in these scenarios? As we seek to deploy robots in environments with ever-increasing complexity, it becomes imperative to develop scalable methods for predicting how well they will generalize when faced with unseen scenarios. Performing hardware evaluations to discover vulnerabilities -- which can depend in surprising ways on the specifics of policy training and architecture -- is often prohibitively expensive to set up and execute, especially when the goal is to test the limits of safe deployment in a sufficiently diverse set of scenarios. As an example, consider a visuomotor diffusion policy [1] trained to perform pick-and-place tasks via behavior cloning (Figure 1). The policy is trained with a large dataset: over 3,000 demonstrations with varied objects, locations, and visual distractors. Will the policy generalize better to a change in the height of the table by a few centimeters (as one might plausibly predict given the variation in 2D object locations in the training dataset) than to a human standing closer to the table than seen during training? If so, what is the absolute degradation of the success rate in each case? As it turns out, the above prediction is incorrect: the success rate of the policy degrades from 65% under nominal conditions to 10% by changing the table height, and remains roughly constant with a human close to the table. Predicting the relative and absolute impact of other factors (e.g., lighting, table backgrounds, object distractors; Figure 1) can be even more challenging.
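The bookkeeping behind the table-height example can be made concrete with a short sketch that reports absolute and relative success-rate degradation per environmental factor. Only the 65% nominal and 10% table-height numbers come from the example above; the code itself is generic and not the paper's predictive method.

# Per-factor degradation bookkeeping: given a nominal success rate and a success rate
# under a perturbed environmental factor, report the absolute and relative drops.
def degradation(nominal: float, perturbed: float):
    absolute = nominal - perturbed
    relative = absolute / nominal if nominal else float("nan")
    return absolute, relative

if __name__ == "__main__":
    nominal = 0.65                                      # nominal success rate from the example
    factors = {"table_height_change": 0.10,             # degraded rate from the example
               "human_near_table": 0.65}                # roughly unchanged, per the example
    for name, rate in factors.items():
        abs_drop, rel_drop = degradation(nominal, rate)
        print(f"{name:>20}: success {rate:.0%}  abs drop {abs_drop:+.0%}  rel drop {rel_drop:+.0%}")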
A Short Note on Evaluating RepNet for Temporal Repetition Counting in Videos
Dwibedi, Debidatta, Aytar, Yusuf, Tompson, Jonathan, Sermanet, Pierre, Zisserman, Andrew
We discuss some recurring issues with how RepNet has been evaluated in various papers. To mitigate these issues, we report RepNet performance on several datasets and release the evaluation code and the RepNet checkpoint used to obtain these results. Code URL: https://github.com/google-research/google-research/blob/master/repnet/
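For reference, a sketch of two metrics commonly reported for temporal repetition counting, off-by-one (OBO) accuracy and count-normalized mean absolute error, is given below. The exact definitions used for RepNet should be taken from the released evaluation code at the URL above; this is an illustrative approximation.

# Common repetition-counting metrics: OBO accuracy (prediction within one repetition of
# the ground truth) and mean absolute error normalized by the ground-truth count.
def obo_accuracy(pred_counts, true_counts):
    hits = sum(abs(p - t) <= 1 for p, t in zip(pred_counts, true_counts))
    return hits / len(true_counts)

def normalized_mae(pred_counts, true_counts):
    return sum(abs(p - t) / t for p, t in zip(pred_counts, true_counts)) / len(true_counts)

if __name__ == "__main__":
    pred, true = [4, 7, 12], [5, 7, 15]
    print(f"OBO: {obo_accuracy(pred, true):.2f}, MAE: {normalized_mae(pred, true):.3f}")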
RT-H: Action Hierarchies Using Language
Belkhale, Suneel, Ding, Tianli, Xiao, Ted, Sermanet, Pierre, Vuong, Quan, Tompson, Jonathan, Chebotar, Yevgen, Dwibedi, Debidatta, Sadigh, Dorsa
Language provides a way to break down complex concepts into digestible pieces. Recent works in robot imitation learning use language-conditioned policies that predict actions given visual observations and the high-level task specified in language. These methods leverage the structure of natural language to share data between semantically similar tasks (e.g., "pick coke can" and "pick an apple") in multi-task datasets. However, as tasks become more semantically diverse (e.g., "pick coke can" and "pour cup"), sharing data between tasks becomes harder, so learning to map high-level tasks to actions requires much more demonstration data. To bridge tasks and actions, our insight is to teach the robot the language of actions, describing low-level motions with more fine-grained phrases like "move arm forward". Predicting these language motions as an intermediate step between tasks and actions forces the policy to learn the shared structure of low-level motions across seemingly disparate tasks. Furthermore, a policy that is conditioned on language motions can easily be corrected during execution through human-specified language motions. This enables a new paradigm for flexible policies that can learn from human intervention in language. Our method RT-H builds an action hierarchy using language motions: it first learns to predict language motions, and conditioned on this and the high-level task, it predicts actions, using visual context at all stages. We show that RT-H leverages this language-action hierarchy to learn policies that are more robust and flexible by effectively tapping into multi-task datasets. We show that these policies not only allow for responding to language interventions, but can also learn from such interventions and outperform methods that learn from teleoperated interventions. Our website and videos are found at https://rt-hierarchy.github.io.
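The action hierarchy described above can be pictured as a two-stage rollout: predict a language motion from the image and task, optionally let a human overwrite it, then predict low-level actions conditioned on the task, the language motion, and the image. The model calls below are random stubs for illustration, not RT-H.

# Two-stage rollout sketch: high-level head predicts a language motion; low-level head
# predicts actions conditioned on the task, the language motion, and the current image.
import random

LANGUAGE_MOTIONS = ["move arm forward", "move arm left", "rotate wrist", "close gripper"]

def predict_language_motion(image, task: str) -> str:
    return random.choice(LANGUAGE_MOTIONS)               # stub for the high-level head

def predict_action(image, task: str, language_motion: str):
    return [random.uniform(-1, 1) for _ in range(7)]     # stub for the low-level head

def rollout(task: str, steps: int = 5, human_correction=None):
    for t in range(steps):
        image = object()                                 # stand-in for a camera frame
        motion = predict_language_motion(image, task)
        if human_correction is not None:                 # language-level intervention
            motion = human_correction
        action = predict_action(image, task, motion)
        print(f"step {t}: motion='{motion}', action[0]={action[0]:+.2f}")

if __name__ == "__main__":
    rollout("pick coke can", human_correction="move arm forward")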
Vid2Robot: End-to-end Video-conditioned Policy Learning with Cross-Attention Transformers
Jain, Vidhi, Attarian, Maria, Joshi, Nikhil J, Wahid, Ayzaan, Driess, Danny, Vuong, Quan, Sanketi, Pannag R, Sermanet, Pierre, Welker, Stefan, Chan, Christine, Gilitschenski, Igor, Bisk, Yonatan, Dwibedi, Debidatta
While large-scale robotic systems typically rely on textual instructions for tasks, this work explores a different approach: can robots infer the task directly from observing humans? This shift necessitates the robot's ability to decode human intent and translate it into executable actions within its physical constraints and environment. We introduce Vid2Robot, a novel end-to-end video-based learning framework for robots. Given a video demonstration of a manipulation task and current visual observations, Vid2Robot directly produces robot actions. This is achieved through a unified representation model trained on a large dataset of human videos and robot trajectories. The model leverages cross-attention mechanisms to fuse prompt video features with the robot's current state and generate appropriate actions that mimic the observed task. To further improve policy performance, we propose auxiliary contrastive losses that enhance the alignment between human and robot video representations. We evaluate Vid2Robot on real-world robots, demonstrating a 20% improvement in performance compared to other video-conditioned policies when using human demonstration videos. Additionally, our model exhibits emergent capabilities, such as successfully transferring observed motions from one object to another, and long-horizon composition, thus showcasing its potential for real-world applications. Project website: vid2robot.github.io
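A generic sketch of the kind of auxiliary contrastive objective mentioned above, an InfoNCE-style loss that pulls together embeddings of a human prompt video and a robot video of the same task, is shown below in NumPy. This is a standard formulation for illustration, not the Vid2Robot training code.

# InfoNCE-style contrastive loss between human-video and robot-video embeddings; the
# matched (same-task) pairs on the diagonal are treated as positives.
import numpy as np

def info_nce(human_emb: np.ndarray, robot_emb: np.ndarray, temperature: float = 0.1) -> float:
    """human_emb, robot_emb: (batch, dim) embeddings; row i of each encodes the same task."""
    h = human_emb / np.linalg.norm(human_emb, axis=1, keepdims=True)
    r = robot_emb / np.linalg.norm(robot_emb, axis=1, keepdims=True)
    logits = h @ r.T / temperature                    # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)       # for numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))        # negative log-likelihood of positives

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, r = rng.normal(size=(8, 128)), rng.normal(size=(8, 128))
    print(f"contrastive loss: {info_nce(h, r):.3f}")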
AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents
Ahn, Michael, Dwibedi, Debidatta, Finn, Chelsea, Arenas, Montse Gonzalez, Gopalakrishnan, Keerthana, Hausman, Karol, Ichter, Brian, Irpan, Alex, Joshi, Nikhil, Julian, Ryan, Kirmani, Sean, Leal, Isabel, Lee, Edward, Levine, Sergey, Lu, Yao, Maddineni, Sharath, Rao, Kanishka, Sadigh, Dorsa, Sanketi, Pannag, Sermanet, Pierre, Vuong, Quan, Welker, Stefan, Xia, Fei, Xiao, Ted, Xu, Peng, Xu, Steve, Xu, Zhuo
Foundation models that incorporate language, vision, and, more recently, actions have revolutionized the ability to harness internet-scale data to reason about useful tasks. However, one of the key challenges of training embodied foundation models is the lack of data grounded in the physical world. In this paper, we propose AutoRT, a system that leverages existing foundation models to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision. AutoRT leverages vision-language models (VLMs) for scene understanding and grounding, and further uses large language models (LLMs) for proposing diverse and novel instructions to be performed by a fleet of robots. Guiding data collection by tapping into the knowledge of foundation models enables AutoRT to effectively reason about autonomy tradeoffs and safety while significantly scaling up data collection for robot learning. We demonstrate AutoRT proposing instructions to over 20 robots across multiple buildings and collecting 77k real robot episodes via both teleoperation and autonomous robot policies. We experimentally show that such "in-the-wild" data collected by AutoRT is significantly more diverse, and that AutoRT's use of LLMs allows for instruction-following data-collection robots that can align with human preferences.
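The orchestration pattern described above, a VLM that summarizes each robot's scene, an LLM that proposes candidate instructions, a safety/feasibility filter, and a dispatcher, can be sketched as follows. Every function is a placeholder; none of this is the AutoRT implementation.

# Orchestration-loop sketch: describe each robot's scene, propose candidate tasks,
# filter them, and dispatch one surviving task per robot. All calls are stubs.
import random

def describe_scene(image) -> str:
    return random.choice(["a desk with a cup and a sponge", "a kitchen counter with fruit"])

def propose_instructions(scene: str, n: int = 3):
    return [f"task {i}: tidy an object seen in '{scene}'" for i in range(n)]

def is_safe_and_feasible(instruction: str) -> bool:
    return "knife" not in instruction            # stand-in for constitution/affordance checks

def orchestrate(robot_ids):
    for robot in robot_ids:
        scene = describe_scene(image=object())
        candidates = [t for t in propose_instructions(scene) if is_safe_and_feasible(t)]
        if candidates:
            print(f"robot {robot}: dispatching '{random.choice(candidates)}'")

if __name__ == "__main__":
    orchestrate(robot_ids=["r01", "r02", "r03"])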
Open X-Embodiment: Robotic Learning Datasets and RT-X Models
Collaboration, Open X-Embodiment, Padalkar, Abhishek, Pooley, Acorn, Mandlekar, Ajay, Jain, Ajinkya, Tung, Albert, Bewley, Alex, Herzog, Alex, Irpan, Alex, Khazatsky, Alexander, Rai, Anant, Singh, Anikait, Garg, Animesh, Brohan, Anthony, Raffin, Antonin, Wahid, Ayzaan, Burgess-Limerick, Ben, Kim, Beomjoon, Schölkopf, Bernhard, Ichter, Brian, Lu, Cewu, Xu, Charles, Finn, Chelsea, Xu, Chenfeng, Chi, Cheng, Huang, Chenguang, Chan, Christine, Pan, Chuer, Fu, Chuyuan, Devin, Coline, Driess, Danny, Pathak, Deepak, Shah, Dhruv, Büchler, Dieter, Kalashnikov, Dmitry, Sadigh, Dorsa, Johns, Edward, Ceola, Federico, Xia, Fei, Stulp, Freek, Zhou, Gaoyue, Sukhatme, Gaurav S., Salhotra, Gautam, Yan, Ge, Schiavi, Giulio, Kahn, Gregory, Su, Hao, Fang, Hao-Shu, Shi, Haochen, Amor, Heni Ben, Christensen, Henrik I, Furuta, Hiroki, Walke, Homer, Fang, Hongjie, Mordatch, Igor, Radosavovic, Ilija, Leal, Isabel, Liang, Jacky, Abou-Chakra, Jad, Kim, Jaehyung, Peters, Jan, Schneider, Jan, Hsu, Jasmine, Bohg, Jeannette, Bingham, Jeffrey, Wu, Jiajun, Wu, Jialin, Luo, Jianlan, Gu, Jiayuan, Tan, Jie, Oh, Jihoon, Malik, Jitendra, Booher, Jonathan, Tompson, Jonathan, Yang, Jonathan, Lim, Joseph J., Silvério, João, Han, Junhyek, Rao, Kanishka, Pertsch, Karl, Hausman, Karol, Go, Keegan, Gopalakrishnan, Keerthana, Goldberg, Ken, Byrne, Kendra, Oslund, Kenneth, Kawaharazuka, Kento, Zhang, Kevin, Rana, Krishan, Srinivasan, Krishnan, Chen, Lawrence Yunliang, Pinto, Lerrel, Fei-Fei, Li, Tan, Liam, Ott, Lionel, Lee, Lisa, Tomizuka, Masayoshi, Spero, Max, Du, Maximilian, Ahn, Michael, Zhang, Mingtong, Ding, Mingyu, Srirama, Mohan Kumar, Sharma, Mohit, Kim, Moo Jin, Kanazawa, Naoaki, Hansen, Nicklas, Heess, Nicolas, Joshi, Nikhil J, Suenderhauf, Niko, Di Palo, Norman, Shafiullah, Nur Muhammad Mahi, Mees, Oier, Kroemer, Oliver, Sanketi, Pannag R, Wohlhart, Paul, Xu, Peng, Sermanet, Pierre, Sundaresan, Priya, Vuong, Quan, Rafailov, Rafael, Tian, Ran, Doshi, Ria, Martín-Martín, Roberto, Mendonca, Russell, Shah, Rutav, Hoque, Ryan, Julian, Ryan, Bustamante, Samuel, Kirmani, Sean, Levine, Sergey, Moore, Sherry, Bahl, Shikhar, Dass, Shivin, Sonawani, Shubham, Song, Shuran, Xu, Sichun, Haldar, Siddhant, Adebola, Simeon, Guist, Simon, Nasiriany, Soroush, Schaal, Stefan, Welker, Stefan, Tian, Stephen, Dasari, Sudeep, Belkhale, Suneel, Osa, Takayuki, Harada, Tatsuya, Matsushima, Tatsuya, Xiao, Ted, Yu, Tianhe, Ding, Tianli, Davchev, Todor, Zhao, Tony Z., Armstrong, Travis, Darrell, Trevor, Jain, Vidhi, Vanhoucke, Vincent, Zhan, Wei, Zhou, Wenxuan, Burgard, Wolfram, Chen, Xi, Wang, Xiaolong, Zhu, Xinghao, Li, Xuanlin, Lu, Yao, Chebotar, Yevgen, Zhou, Yifan, Zhu, Yifeng, Xu, Ying, Wang, Yixuan, Bisk, Yonatan, Cho, Yoonyoung, Lee, Youngwoon, Cui, Yuchen, Wu, Yueh-Hua, Tang, Yujin, Zhu, Yuke, Li, Yunzhu, Iwasawa, Yusuke, Matsuo, Yutaka, Xu, Zhuo, Cui, Zichen Jeff
Large, high-capacity models trained on diverse datasets have shown remarkable successes in efficiently tackling downstream applications. In domains from NLP to Computer Vision, this has led to a consolidation of pretrained models, with general pretrained backbones serving as a starting point for many applications. Can such a consolidation happen in robotics? Conventionally, robotic learning methods train a separate model for every application, every robot, and even every environment. Can we instead train a generalist X-robot policy that can be adapted efficiently to new robots, tasks, and environments? In this paper, we provide datasets in standardized data formats and models to make it possible to explore this possibility in the context of robotic manipulation, alongside experimental results that provide an example of effective X-robot policies. We assemble a dataset from 22 different robots collected through a collaboration between 21 institutions, demonstrating 527 skills (160,266 tasks). We show that a high-capacity model trained on this data, which we call RT-X, exhibits positive transfer and improves the capabilities of multiple robots by leveraging experience from other platforms. More details can be found on the project website $\href{https://robotics-transformer-x.github.io}{\text{robotics-transformer-x.github.io}}$.
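Consuming episodes in a standardized, RLDS-like layout of the kind referred to above might look like the sketch below, where each episode is a sequence of steps holding an observation, an action, and a language instruction. The field names are assumptions for illustration; the released datasets define their own exact schemas.

# Sketch of iterating over episodes in a standardized step-based layout and yielding
# (image, instruction, action) training triples. Field names are illustrative only.
from typing import Any, Dict, Iterator, List, Tuple

def iter_transitions(episodes: List[Dict[str, Any]]) -> Iterator[Tuple[Any, str, List[float]]]:
    """Yield (image, instruction, action) triples from a list of episode dicts."""
    for episode in episodes:
        for step in episode["steps"]:
            obs = step["observation"]
            yield obs["image"], obs["natural_language_instruction"], step["action"]

if __name__ == "__main__":
    toy_episode = {
        "steps": [
            {"observation": {"image": None, "natural_language_instruction": "pick the block"},
             "action": [0.0] * 7},
        ]
    }
    for image, instruction, action in iter_transitions([toy_episode]):
        print(instruction, len(action), "action dims")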
RoboVQA: Multimodal Long-Horizon Reasoning for Robotics
Sermanet, Pierre, Ding, Tianli, Zhao, Jeffrey, Xia, Fei, Dwibedi, Debidatta, Gopalakrishnan, Keerthana, Chan, Christine, Dulac-Arnold, Gabriel, Maddineni, Sharath, Joshi, Nikhil J, Florence, Pete, Han, Wei, Baruch, Robert, Lu, Yao, Mirchandani, Suvir, Xu, Peng, Sanketi, Pannag, Hausman, Karol, Shafran, Izhak, Ichter, Brian, Cao, Yuan
We present a scalable, bottom-up, and intrinsically diverse data collection scheme that can be used for high-level reasoning with long and medium horizons and that has 2.2x higher throughput than traditional narrow top-down step-by-step collection. We collect realistic data by performing any user requests within the entirety of 3 office buildings and using multiple robot and human embodiments. With this data, we show that models trained on all embodiments perform better than ones trained on the robot data only, even when evaluated solely on robot episodes. We find that for a fixed collection budget it is beneficial to take advantage of cheaper human collection along with robot collection. We release a large and highly diverse (29,520 unique instructions) dataset dubbed RoboVQA containing 829,502 (video, text) pairs for robotics-focused visual question answering. We also demonstrate how evaluating real robot experiments with an intervention mechanism enables performing tasks to completion, making it deployable with human oversight even when imperfect, while also providing a single performance metric. We demonstrate a single video-conditioned model named RoboVQA-VideoCoCa trained on our dataset that is capable of performing a variety of grounded high-level reasoning tasks in broad realistic settings with a cognitive intervention rate 46% lower than the zero-shot state-of-the-art visual language model (VLM) baseline and is able to guide real robots through long-horizon tasks. The performance gap with zero-shot state-of-the-art models indicates that a lot of grounded data remains to be collected for real-world deployment, emphasizing the critical need for scalable data collection approaches. Finally, we show that video VLMs significantly outperform single-image VLMs, with an average error rate reduction of 19% across all VQA tasks. Data and videos available at https://robovqa.github.io
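An intervention-rate style metric in the spirit of the cognitive intervention rate reported above can be sketched as the fraction of high-level decisions during execution that a human overseer had to correct. The paper's exact definition may differ; this is illustrative bookkeeping only.

# Intervention-rate sketch: the fraction of per-step decisions that required a human
# correction, aggregated over a set of episodes.
def intervention_rate(episodes):
    """episodes: list of lists of booleans, True where a human intervened on a step."""
    steps = [flag for episode in episodes for flag in episode]
    return sum(steps) / len(steps) if steps else 0.0

if __name__ == "__main__":
    logs = [[False, False, True, False], [False, False, False], [True, False]]
    print(f"cognitive intervention rate: {intervention_rate(logs):.1%}")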