Springenberg, Jost Tobias
Gemini Robotics: Bringing AI into the Physical World
Gemini Robotics Team, Abeyruwan, Saminda, Ainslie, Joshua, Alayrac, Jean-Baptiste, Arenas, Montserrat Gonzalez, Armstrong, Travis, Balakrishna, Ashwin, Baruch, Robert, Bauza, Maria, Blokzijl, Michiel, Bohez, Steven, Bousmalis, Konstantinos, Brohan, Anthony, Buschmann, Thomas, Byravan, Arunkumar, Cabi, Serkan, Caluwaerts, Ken, Casarini, Federico, Chang, Oscar, Chen, Jose Enrique, Chen, Xi, Chiang, Hao-Tien Lewis, Choromanski, Krzysztof, D'Ambrosio, David, Dasari, Sudeep, Davchev, Todor, Devin, Coline, Di Palo, Norman, Ding, Tianli, Dostmohamed, Adil, Driess, Danny, Du, Yilun, Dwibedi, Debidatta, Elabd, Michael, Fantacci, Claudio, Fong, Cody, Frey, Erik, Fu, Chuyuan, Giustina, Marissa, Gopalakrishnan, Keerthana, Graesser, Laura, Hasenclever, Leonard, Heess, Nicolas, Hernaez, Brandon, Herzog, Alexander, Hofer, R. Alex, Humplik, Jan, Iscen, Atil, Jacob, Mithun George, Jain, Deepali, Julian, Ryan, Kalashnikov, Dmitry, Karagozler, M. Emre, Karp, Stefani, Kew, Chase, Kirkland, Jerad, Kirmani, Sean, Kuang, Yuheng, Lampe, Thomas, Laurens, Antoine, Leal, Isabel, Lee, Alex X., Lee, Tsang-Wei Edward, Liang, Jacky, Lin, Yixin, Maddineni, Sharath, Majumdar, Anirudha, Michaely, Assaf Hurwitz, Moreno, Robert, Neunert, Michael, Nori, Francesco, Parada, Carolina, Parisotto, Emilio, Pastor, Peter, Pooley, Acorn, Rao, Kanishka, Reymann, Krista, Sadigh, Dorsa, Saliceti, Stefano, Sanketi, Pannag, Sermanet, Pierre, Shah, Dhruv, Sharma, Mohit, Shea, Kathryn, Shu, Charles, Sindhwani, Vikas, Singh, Sumeet, Soricut, Radu, Springenberg, Jost Tobias, Sterneck, Rachel, Surdulescu, Razvan, Tan, Jie, Tompson, Jonathan, Vanhoucke, Vincent, Varley, Jake, Vesom, Grace, Vezzani, Giulia, Vinyals, Oriol, Wahid, Ayzaan, Welker, Stefan, Wohlhart, Paul, Xia, Fei, Xiao, Ted, Xie, Annie, Xie, Jinyu, Xu, Peng, Xu, Sichun, Xu, Ying, Xu, Zhuo, Yang, Yuxiang, Yao, Rui, Yaroshenko, Sergey, Yu, Wenhao, Yuan, Wentao, Zhang, Jingwei, Zhang, Tingnan, Zhou, Allan, Zhou, Yuxiang
Recent advancements in large multimodal models have led to the emergence of remarkable generalist capabilities in digital domains, yet their translation to physical agents such as robots remains a significant challenge. This report introduces a new family of AI models purposefully designed for robotics and built upon the foundation of Gemini 2.0. We present Gemini Robotics, an advanced Vision-Language-Action (VLA) generalist model capable of directly controlling robots. Gemini Robotics executes smooth and reactive movements to tackle a wide range of complex manipulation tasks, is robust to variations in object types and positions, handles unseen environments, and follows diverse, open-vocabulary instructions. We show that with additional fine-tuning, Gemini Robotics can be specialized to new capabilities, including solving long-horizon, highly dexterous tasks, learning new short-horizon tasks from as few as 100 demonstrations, and adapting to completely novel robot embodiments. This is made possible because Gemini Robotics builds on top of the Gemini Robotics-ER model, the second model we introduce in this work. Gemini Robotics-ER (Embodied Reasoning) extends Gemini's multimodal reasoning capabilities into the physical world, with enhanced spatial and temporal understanding. This enables capabilities relevant to robotics, including object detection, pointing, trajectory and grasp prediction, multi-view correspondence, and 3D bounding box prediction. We show how this novel combination can support a variety of robotics applications. We also discuss and address important safety considerations related to this new class of robotics foundation models. The Gemini Robotics family marks a substantial step towards developing general-purpose robots that realize AI's potential in the physical world.
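As a rough illustration of how a vision-language-action policy like the one described is typically queried inside a control loop, consider the sketch below: the model receives camera images, proprioception, and a natural-language instruction, and returns a short chunk of low-level actions before being re-queried. VLAPolicy and DummyEnv are hypothetical placeholders for illustration only, not the Gemini Robotics API.

```python
import numpy as np

# Hypothetical sketch of a vision-language-action (VLA) control loop.
# VLAPolicy and DummyEnv are placeholders, not the Gemini Robotics API.


class VLAPolicy:
    def predict_actions(self, images, proprio, instruction, chunk=10, dof=7):
        # A real model would tokenize the images and the instruction and decode
        # continuous actions; here we return a zero action chunk of the right shape.
        return np.zeros((chunk, dof))


class DummyEnv:
    def observe(self):
        return [np.zeros((224, 224, 3))], np.zeros(14)  # one camera image, 14-D proprio

    def step(self, action):
        pass  # would command the robot with one low-level action


def run_episode(policy, env, instruction, max_steps=300, chunk=10):
    for _ in range(0, max_steps, chunk):
        images, proprio = env.observe()
        actions = policy.predict_actions(images, proprio, instruction, chunk=chunk)
        for a in actions:  # execute the chunk, then re-query the model
            env.step(a)


run_episode(VLAPolicy(), DummyEnv(), "pick up the banana and put it in the bowl")
```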
Preference Optimization as Probabilistic Inference
Abdolmaleki, Abbas, Piot, Bilal, Shahriari, Bobak, Springenberg, Jost Tobias, Hertweck, Tim, Joshi, Rishabh, Oh, Junhyuk, Bloesch, Michael, Lampe, Thomas, Heess, Nicolas, Buchli, Jonas, Riedmiller, Martin
The use of preference-annotated data for training machine learning models has a long history, going back to early algorithms for recommender systems and market research (Bonilla et al., 2010; Boutilier, 2002; Guo and Sanner, 2010). These days, preference optimization algorithms are receiving renewed attention since they are a natural candidate for shaping the outputs of deep learning systems, such as large language models (Ouyang et al., 2022; Team et al., 2024) or control policies, via human feedback (Azar et al., 2023; Christiano et al., 2017; Rafailov et al., 2023). Arguably, preference optimization algorithms can also be a natural choice even when direct human feedback is not available but one instead aims to optimize a machine learning model based on feedback from a hand-coded or learned critic function (judging the desirability of solutions). Here, preference optimization methods are useful since they let us optimize the model to achieve desired outcomes based on relative rankings between outcomes alone (rather than requiring absolute labels or carefully crafted reward functions). Among preference optimization approaches, those that use preference data directly - as opposed to casting preference optimization as reinforcement learning from (human) feedback - such as DPO (Rafailov et al., 2023), have emerged as particularly successful, since they only require access to an offline dataset of paired preference data and are fairly robust to the application domain and hyperparameter settings. However, algorithms within this class make specific assumptions tailored to their application domain. They were designed to optimize LLMs from human feedback in the form of comparisons of generated sentences and thus, by design, require paired preference data (since they directly model a specific choice of preference distribution). We are interested in finding algorithms that are more flexible and applicable in settings where the assumptions underlying DPO do not apply.
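For reference, the paired-preference objective that DPO (Rafailov et al., 2023) optimizes, and whose paired-data requirement the text highlights, can be written in a few lines. The sketch below assumes that the summed log-probabilities of the chosen and rejected responses under the policy and under a frozen reference model have already been computed.

```python
import numpy as np


def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Paired-preference DPO loss: -log sigmoid(beta * (r_chosen - r_rejected)),
    where r_* are log-ratios between the policy and the reference model.

    All inputs are summed log-probabilities of full responses, shape (batch,).
    The loss is only defined for *paired* comparisons, which is exactly the
    restriction discussed in the text above.
    """
    chosen_logratio = logp_chosen - ref_logp_chosen
    rejected_logratio = logp_rejected - ref_logp_rejected
    margin = beta * (chosen_logratio - rejected_logratio)
    return np.mean(np.log1p(np.exp(-margin)))  # -log sigmoid(margin)


# Toy check: a policy that already prefers the chosen responses gets a lower loss.
ref_c, ref_r = np.array([-10.0, -12.0]), np.array([-11.0, -13.0])
print(dpo_loss(ref_c + 1.0, ref_r - 1.0, ref_c, ref_r))  # smaller loss
print(dpo_loss(ref_c - 1.0, ref_r + 1.0, ref_c, ref_r))  # larger loss
```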
Offline Actor-Critic Reinforcement Learning Scales to Large Models
Springenberg, Jost Tobias, Abdolmaleki, Abbas, Zhang, Jingwei, Groth, Oliver, Bloesch, Michael, Lampe, Thomas, Brakel, Philemon, Bechtle, Sarah, Kapturowski, Steven, Hafner, Roland, Heess, Nicolas, Riedmiller, Martin
We show that offline actor-critic reinforcement learning can scale to large models - such as transformers - and follows similar scaling laws as supervised learning. We find that offline actor-critic algorithms can outperform strong, supervised, behavioral cloning baselines for multi-task training on a large dataset containing both sub-optimal and expert behavior on 132 continuous control tasks. We introduce a Perceiver-based actor-critic model and elucidate the key model features needed to make offline RL work with self- and cross-attention modules. Overall, we find that: i) simple offline actor-critic algorithms are a natural choice for gradually moving away from the currently predominant paradigm of behavioral cloning, and ii) via offline RL it is possible to learn multi-task policies that master many domains simultaneously, including real robotics tasks, from sub-optimal demonstrations or self-generated data.
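To make the basic structure of offline actor-critic learning concrete, the sketch below shows one generic update on a batch of logged transitions: a TD-trained critic plus a policy loss that maximizes the critic while staying close to the logged actions. This is an illustrative stand-in under simple assumptions, not the paper's Perceiver-based architecture or its exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic sketch of one offline actor-critic update on logged transitions (s, a, r, s').
# Illustrative stand-in only; not the paper's Perceiver-based model or loss.

obs_dim, act_dim = 8, 2
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=3e-4)


def update(s, a, r, s_next, gamma=0.99, bc_weight=1.0):
    # Critic: one-step TD target, bootstrapping with the actor's next action.
    with torch.no_grad():
        a_next = torch.tanh(actor(s_next))
        target = r + gamma * critic(torch.cat([s_next, a_next], dim=-1))
    critic_loss = F.mse_loss(critic(torch.cat([s, a], dim=-1)), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: maximize the critic's value while staying close to the logged
    # actions (the behavioral-cloning term that anchors offline learning).
    a_pi = torch.tanh(actor(s))
    actor_loss = -critic(torch.cat([s, a_pi], dim=-1)).mean() + bc_weight * F.mse_loss(a_pi, a)
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    return critic_loss.item(), actor_loss.item()


# Toy batch of random logged transitions.
s, a = torch.randn(32, obs_dim), torch.rand(32, act_dim) * 2 - 1
r, s_next = torch.randn(32, 1), torch.randn(32, obs_dim)
print(update(s, a, r, s_next))
```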
GATS: Gather-Attend-Scatter
Zolna, Konrad, Cabi, Serkan, Chen, Yutian, Lau, Eric, Fantacci, Claudio, Pasukonis, Jurgis, Springenberg, Jost Tobias, Colmenarejo, Sergio Gomez
As the AI community increasingly adopts large-scale models, it is crucial to develop general and flexible tools to integrate them. We introduce Gather-Attend-Scatter (GATS), a novel module that enables seamless combination of pretrained foundation models, both trainable and frozen, into larger multimodal networks. GATS empowers AI systems to process and generate information across multiple modalities at different rates. In contrast to traditional fine-tuning, GATS allows for the original component models to remain frozen, avoiding the risk of them losing important knowledge acquired during the pretraining phase. We demonstrate the utility and versatility of GATS with a few experiments across games, robotics, and multimodal input-output systems.
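A heavily simplified sketch of the gather, attend, scatter pattern the name describes: token embeddings are gathered from several component models, a small attention module lets them interact, and the result is scattered back into each stream, leaving the (possibly frozen) component models untouched. The single attention layer and the additive scatter are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

# Heavily simplified gather -> attend -> scatter step; the additive "scatter"
# and the single attention layer are illustrative assumptions only.


class GatherAttendScatter(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attend = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, streams):
        # Gather: concatenate token embeddings produced by each component model.
        gathered = torch.cat(streams, dim=1)                  # (B, sum_T, D)
        # Attend: let tokens from all modalities/models exchange information.
        mixed, _ = self.attend(gathered, gathered, gathered)  # (B, sum_T, D)
        # Scatter: split the mixed tokens back and add them to each stream,
        # without modifying the component models themselves.
        lengths = [s.shape[1] for s in streams]
        pieces = torch.split(mixed, lengths, dim=1)
        return [s + p for s, p in zip(streams, pieces)]


# Example: a "vision" stream with 16 tokens and a "text" stream with 8 tokens.
gats = GatherAttendScatter(dim=32)
vision, text = torch.randn(2, 16, 32), torch.randn(2, 8, 32)
out_vision, out_text = gats([vision, text])
print(out_vision.shape, out_text.shape)
```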
RoboCat: A Self-Improving Generalist Agent for Robotic Manipulation
Bousmalis, Konstantinos, Vezzani, Giulia, Rao, Dushyant, Devin, Coline, Lee, Alex X., Bauza, Maria, Davchev, Todor, Zhou, Yuxiang, Gupta, Agrim, Raju, Akhil, Laurens, Antoine, Fantacci, Claudio, Dalibard, Valentin, Zambelli, Martina, Martins, Murilo, Pevceviciute, Rugile, Blokzijl, Michiel, Denil, Misha, Batchelor, Nathan, Lampe, Thomas, Parisotto, Emilio, Żołna, Konrad, Reed, Scott, Colmenarejo, Sergio Gómez, Scholz, Jon, Abdolmaleki, Abbas, Groth, Oliver, Regli, Jean-Baptiste, Sushkov, Oleg, Rothörl, Tom, Chen, José Enrique, Aytar, Yusuf, Barker, Dave, Ortiz, Joy, Riedmiller, Martin, Springenberg, Jost Tobias, Hadsell, Raia, Nori, Francesco, Heess, Nicolas
The ability to leverage heterogeneous robotic experience from different robots and tasks to quickly master novel skills and embodiments has the potential to transform robot learning. Inspired by recent advances in foundation models for vision and language, we propose a multi-embodiment, multi-task generalist agent for robotic manipulation. This agent, named RoboCat, is a visual goal-conditioned decision transformer capable of consuming action-labelled visual experience. This data spans a large repertoire of motor control skills from simulated and real robotic arms with varying sets of observations and actions. With RoboCat, we demonstrate the ability to generalise to new tasks and robots, both zero-shot and through adaptation using only 100-1000 examples for the target task. We also show how a trained model itself can be used to generate data for subsequent training iterations, thus providing a basic building block for an autonomous improvement loop. We investigate the agent's capabilities, with large-scale evaluations both in simulation and on three different real robot embodiments. We find that as we grow and diversify its training data, RoboCat not only shows signs of cross-task transfer, but also becomes more efficient at adapting to new tasks.
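The self-improvement loop mentioned at the end of the abstract can be summarized schematically as below: fine-tune the generalist on a small set of demonstrations for a new task, let the resulting specialist generate additional data, and fold that data back into the next generalist training run. The train, finetune, and collect_episodes functions are trivial stand-ins so the loop executes; they are not RoboCat's actual training code.

```python
# Schematic sketch of the self-improvement loop; all functions are stand-ins.

def train(dataset):
    return {"trained_on": len(dataset)}          # stand-in for a generalist agent


def finetune(agent, demos):
    return {"base": agent, "demos": len(demos)}  # stand-in for a task specialist


def collect_episodes(specialist, n=50):
    return [f"episode_{i}" for i in range(n)]    # stand-in for self-generated data


def self_improvement(dataset, new_task_demos, num_iterations=3):
    agent = train(dataset)                                # multi-task generalist
    for _ in range(num_iterations):
        specialist = finetune(agent, new_task_demos)      # adapt with 100-1000 demos
        dataset = dataset + collect_episodes(specialist)  # grow the training set
        agent = train(dataset)                            # retrain the generalist
    return agent


print(self_improvement(["demo"] * 1000, ["new_demo"] * 100))
```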
Mastering Stacking of Diverse Shapes with Large-Scale Iterative Reinforcement Learning on Real Robots
Lampe, Thomas, Abdolmaleki, Abbas, Bechtle, Sarah, Huang, Sandy H., Springenberg, Jost Tobias, Bloesch, Michael, Groth, Oliver, Hafner, Roland, Hertweck, Tim, Neunert, Michael, Wulfmeier, Markus, Zhang, Jingwei, Nori, Francesco, Heess, Nicolas, Riedmiller, Martin
Reinforcement learning solely from an agent's self-generated data is often believed to be infeasible for learning on real robots, due to the amount of data needed. However, if done right, agents learning from real data can be surprisingly efficient through re-using previously collected sub-optimal data. In this paper we demonstrate how the increased understanding of off-policy learning methods and their embedding in an iterative online/offline scheme ("collect and infer") can drastically improve data-efficiency by using all the collected experience, which empowers learning from real robot experience only. Moreover, the resulting policy improves significantly over the state of the art on a recently proposed real robot manipulation benchmark. Our approach learns end-to-end, directly from pixels, and does not rely on additional human domain knowledge such as a simulator or demonstrations.
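The iterative "collect and infer" scheme referred to above alternates between collecting episodes with the current policy and retraining a new policy off-policy on all experience gathered so far, including earlier sub-optimal episodes. The sketch below is a minimal schematic of that loop; run_on_robot and offline_rl are trivial stand-ins, not the paper's actual learning code.

```python
# Minimal schematic of "collect and infer"; the two functions are stand-ins.

def run_on_robot(policy, num_episodes=20):
    # Stand-in for running the current policy on the real robot.
    return [("episode", policy["version"], i) for i in range(num_episodes)]


def offline_rl(replay_buffer, version):
    # Stand-in for an off-policy/offline RL update on the *full* buffer.
    return {"version": version, "trained_on": len(replay_buffer)}


replay_buffer = []
policy = {"version": 0, "trained_on": 0}
for iteration in range(1, 6):
    replay_buffer += run_on_robot(policy)          # collect, and keep everything
    policy = offline_rl(replay_buffer, iteration)  # infer: retrain on all data so far
print(policy)
```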
A Generalist Dynamics Model for Control
Schubert, Ingmar, Zhang, Jingwei, Bruce, Jake, Bechtle, Sarah, Parisotto, Emilio, Riedmiller, Martin, Springenberg, Jost Tobias, Byravan, Arunkumar, Hasenclever, Leonard, Heess, Nicolas
Figure 1 | Schematic overview of the data regimes for which we show experimental results. These regimes are characterized by how much data from the target environment is available to the agent, and how much (potentially generalizable) experience has been collected in other environments. The experiments demonstrate both that TDMs are capable single-environment models and that they generalize across environments. If sufficient data from the target environment is available, we can learn a single-environment specialist model (section 5.1). If there are only small amounts of data from the target environment, but more data from other environments, a generalist model can be pre-trained and then fine-tuned on the target environment (section 5.2.1). Finally, if we are able to train a generalist model on large amounts of data from different environments, we can zero-shot apply this model to our target environment without fine-tuning (section 5.2.2). We also show an example of unsuccessful generalization in section E.
On Multi-objective Policy Optimization as a Tool for Reinforcement Learning: Case Studies in Offline RL and Finetuning
Abdolmaleki, Abbas, Huang, Sandy H., Vezzani, Giulia, Shahriari, Bobak, Springenberg, Jost Tobias, Mishra, Shruti, TB, Dhruva, Byravan, Arunkumar, Bousmalis, Konstantinos, Gyorgy, Andras, Szepesvari, Csaba, Hadsell, Raia, Heess, Nicolas, Riedmiller, Martin
Many advances that have improved the robustness and efficiency of deep reinforcement learning (RL) algorithms can, in one way or another, be understood as introducing additional objectives or constraints in the policy optimization step. This includes ideas as far-ranging as exploration bonuses, entropy regularization, and regularization toward teachers or data priors. Often, the task reward and auxiliary objectives are in conflict, and in this paper we argue that this makes it natural to treat these cases as instances of multi-objective (MO) optimization problems. We demonstrate how this perspective allows us to develop novel and more effective RL algorithms. In particular, we focus on offline RL and finetuning as case studies, and show that existing approaches can be understood as MO algorithms relying on linear scalarization. We hypothesize that replacing linear scalarization with a better algorithm can improve performance. We introduce Distillation of a Mixture of Experts (DiME), a new MORL algorithm that outperforms linear scalarization and can be applied to these non-standard MO problems. We demonstrate that for offline RL, DiME leads to a simple new algorithm that outperforms the state of the art. For finetuning, we derive new algorithms that learn to outperform the teacher policy.

Deep reinforcement learning (RL) algorithms have solved a number of challenging problems, including in games (Mnih et al., 2015; Silver et al., 2016), simulated continuous control (Heess et al., 2017; Peng et al., 2018), and robotics (OpenAI et al., 2018). The standard RL setting appeals through its simplicity: an agent acts in the environment and can discover complex solutions simply by maximizing cumulative discounted reward. In practice, however, the situation is often more complicated. For instance, without a carefully crafted reward function or sophisticated exploration strategy, learning may require hundreds of millions of environment interactions, or may not be possible at all. A number of strategies have been developed to mitigate the shortcomings of the pure RL paradigm. These include strategies that regularize the final solution, for instance by maximizing auxiliary rewards (Jaderberg et al., 2017) or the entropy of the policy (Mnih et al., 2016; Haarnoja et al., 2018).
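To make the "linear scalarization" baseline concrete: it simply combines the competing objectives (here, a value-maximization term and a behavioral-cloning regularizer, as in the offline RL case study) with fixed weights into one scalar loss before taking a gradient step. The toy code below illustrates that baseline only; it is not an implementation of DiME, and the network, weights, and value function are placeholders.

```python
import torch
import torch.nn as nn

# Minimal illustration of linear scalarization of two conflicting objectives.
# This is the baseline the paper argues against, not the DiME algorithm.

obs_dim, act_dim = 8, 2
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)


def scalarized_update(s, a_logged, q_fn, w_rl=1.0, w_bc=0.5):
    a_pi = torch.tanh(policy(s))
    loss_rl = -q_fn(s, a_pi).mean()            # objective 1: maximize estimated value
    loss_bc = ((a_pi - a_logged) ** 2).mean()  # objective 2: stay close to the data
    loss = w_rl * loss_rl + w_bc * loss_bc     # linear scalarization with fixed weights
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()


# Toy value function and batch.
q_fn = lambda s, a: -(a ** 2).sum(dim=-1, keepdim=True)
print(scalarized_update(torch.randn(16, obs_dim), torch.zeros(16, act_dim), q_fn))
```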
Leveraging Jumpy Models for Planning and Fast Learning in Robotic Domains
Zhang, Jingwei, Springenberg, Jost Tobias, Byravan, Arunkumar, Hasenclever, Leonard, Abdolmaleki, Abbas, Rao, Dushyant, Heess, Nicolas, Riedmiller, Martin
From daily interactions with the world, humans gradually develop an internal understanding of which series of events would be triggered when a certain sequence of actions is taken (Hogendoorn and Burkitt, 2018; Maus et al., 2013; Nortmann et al., 2015). This mental model of the world can serve as a compact proxy of our previous experiences and help us plan out routes to desired goals before taking action (Ha and Schmidhuber, 2018). Studies have further implied that these mental predictive models might not be restricted to the level of primitive actions (Botvinick, 2008; Consul et al., 2022), but rather consider predictions over larger timescales that abstract away detailed behavioral consequences, which can enable efficient long-horizon planning to guide our daily decision making. When developing intelligent artificial agents, it is therefore natural to imagine a similar process being useful for learning and transferring abstract models of the world across streams of experiences and tasks. We expect such a temporally abstract model of actions and dynamics to be significantly more useful than a simple one-step prediction model (together with primitive policies) when transferring it to a target task. This is because it should allow us to rapidly plan over long trajectories (to find states with high rewards) while alleviating the common problem of error accumulation that occurs when chaining one-step prediction models, which limits the effective planning horizon in most existing methods.
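A toy numerical illustration (not the paper's model) of the error-accumulation argument: a small per-step modelling error compounds when a one-step model is chained over k steps, whereas a temporally abstract k-step ("jumpy") model incurs a comparable error only once per jump. The linear dynamics and the 1% multiplicative error below are arbitrary choices for the sake of the example.

```python
import numpy as np

# Toy comparison of chaining a one-step model vs. one k-step ("jumpy") prediction.

A = np.array([[1.0, 0.1], [0.0, 1.0]])         # true one-step dynamics x' = A x
k = 10
x0 = np.array([1.0, 1.0])

x_true = np.linalg.matrix_power(A, k) @ x0

A_hat = 1.01 * A                                # one-step model with a 1% error
Ak_hat = 1.01 * np.linalg.matrix_power(A, k)    # jumpy k-step model with the same 1% error

x_chained = np.linalg.matrix_power(A_hat, k) @ x0  # error compounds roughly like (1.01)^k
x_jumpy = Ak_hat @ x0                              # error incurred only once

print("chained one-step error:", np.linalg.norm(x_chained - x_true))
print("jumpy k-step error:    ", np.linalg.norm(x_jumpy - x_true))
```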
Evaluating model-based planning and planner amortization for continuous control
Byravan, Arunkumar, Hasenclever, Leonard, Trochim, Piotr, Mirza, Mehdi, Ialongo, Alessandro Davide, Tassa, Yuval, Springenberg, Jost Tobias, Abdolmaleki, Abbas, Heess, Nicolas, Merel, Josh, Riedmiller, Martin
There is a widespread intuition that model-based control methods should be able to surpass the data efficiency of model-free approaches. In this paper we attempt to evaluate this intuition on various challenging locomotion tasks. We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning; the learned policy serves as a proposal for MPC. We find that well-tuned model-free agents are strong baselines even for high-DoF control problems, but MPC with learned proposals and models (trained on the fly or transferred from related tasks) can significantly improve performance and data efficiency in hard multi-task/multi-goal settings. Finally, we show that it is possible to distil a model-based planner into a policy that amortizes the planning computation without any loss of performance. Videos of agents performing different tasks can be seen at https://sites.google.com/view/mbrl-amortization/home.
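A generic sketch of one step of the hybrid scheme described above: candidate action sequences are sampled around the learned policy proposal, rolled out in the learned model, scored by cumulative reward, and only the first action of the best sequence is executed. The toy dynamics, reward, and proposal below are stand-ins, and the paper's planner is more sophisticated than this best-of-N search.

```python
import numpy as np

# Generic sampling-based MPC step with a learned model and a policy proposal.
# All components here are toy stand-ins for illustration.

rng = np.random.default_rng(0)
obs_dim, act_dim, horizon, num_samples = 4, 2, 10, 64

model = lambda x, a: 0.95 * x + 0.1 * np.tanh(a).sum()  # toy learned dynamics
reward = lambda x, a: -np.sum(x ** 2)                    # toy reward: drive the state to zero
proposal = lambda x: -0.5 * x[:act_dim]                  # toy learned policy proposal


def mpc_step(x0, noise_scale=0.3):
    best_return, best_first_action = -np.inf, None
    for _ in range(num_samples):
        x, total, first_action = x0.copy(), 0.0, None
        for t in range(horizon):
            a = proposal(x) + noise_scale * rng.normal(size=act_dim)  # sample around the proposal
            if t == 0:
                first_action = a
            x = model(x, a)                                           # roll out the learned model
            total += reward(x, a)
        if total > best_return:
            best_return, best_first_action = total, first_action
    return best_first_action                                          # execute only the first action


print(mpc_step(rng.normal(size=obs_dim)))
```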