Byravan, Arunkumar
Gemini Robotics: Bringing AI into the Physical World
Gemini Robotics Team, Abeyruwan, Saminda, Ainslie, Joshua, Alayrac, Jean-Baptiste, Arenas, Montserrat Gonzalez, Armstrong, Travis, Balakrishna, Ashwin, Baruch, Robert, Bauza, Maria, Blokzijl, Michiel, Bohez, Steven, Bousmalis, Konstantinos, Brohan, Anthony, Buschmann, Thomas, Byravan, Arunkumar, Cabi, Serkan, Caluwaerts, Ken, Casarini, Federico, Chang, Oscar, Chen, Jose Enrique, Chen, Xi, Chiang, Hao-Tien Lewis, Choromanski, Krzysztof, D'Ambrosio, David, Dasari, Sudeep, Davchev, Todor, Devin, Coline, Di Palo, Norman, Ding, Tianli, Dostmohamed, Adil, Driess, Danny, Du, Yilun, Dwibedi, Debidatta, Elabd, Michael, Fantacci, Claudio, Fong, Cody, Frey, Erik, Fu, Chuyuan, Giustina, Marissa, Gopalakrishnan, Keerthana, Graesser, Laura, Hasenclever, Leonard, Heess, Nicolas, Hernaez, Brandon, Herzog, Alexander, Hofer, R. Alex, Humplik, Jan, Iscen, Atil, Jacob, Mithun George, Jain, Deepali, Julian, Ryan, Kalashnikov, Dmitry, Karagozler, M. Emre, Karp, Stefani, Kew, Chase, Kirkland, Jerad, Kirmani, Sean, Kuang, Yuheng, Lampe, Thomas, Laurens, Antoine, Leal, Isabel, Lee, Alex X., Lee, Tsang-Wei Edward, Liang, Jacky, Lin, Yixin, Maddineni, Sharath, Majumdar, Anirudha, Michaely, Assaf Hurwitz, Moreno, Robert, Neunert, Michael, Nori, Francesco, Parada, Carolina, Parisotto, Emilio, Pastor, Peter, Pooley, Acorn, Rao, Kanishka, Reymann, Krista, Sadigh, Dorsa, Saliceti, Stefano, Sanketi, Pannag, Sermanet, Pierre, Shah, Dhruv, Sharma, Mohit, Shea, Kathryn, Shu, Charles, Sindhwani, Vikas, Singh, Sumeet, Soricut, Radu, Springenberg, Jost Tobias, Sterneck, Rachel, Surdulescu, Razvan, Tan, Jie, Tompson, Jonathan, Vanhoucke, Vincent, Varley, Jake, Vesom, Grace, Vezzani, Giulia, Vinyals, Oriol, Wahid, Ayzaan, Welker, Stefan, Wohlhart, Paul, Xia, Fei, Xiao, Ted, Xie, Annie, Xie, Jinyu, Xu, Peng, Xu, Sichun, Xu, Ying, Xu, Zhuo, Yang, Yuxiang, Yao, Rui, Yaroshenko, Sergey, Yu, Wenhao, Yuan, Wentao, Zhang, Jingwei, Zhang, Tingnan, Zhou, Allan, Zhou, Yuxiang
Recent advancements in large multimodal models have led to the emergence of remarkable generalist capabilities in digital domains, yet their translation to physical agents such as robots remains a significant challenge. This report introduces a new family of AI models purposefully designed for robotics and built upon the foundation of Gemini 2.0. We present Gemini Robotics, an advanced Vision-Language-Action (VLA) generalist model capable of directly controlling robots. Gemini Robotics executes smooth and reactive movements to tackle a wide range of complex manipulation tasks while also being robust to variations in object types and positions, handling unseen environments, and following diverse, open-vocabulary instructions. We show that with additional fine-tuning, Gemini Robotics can be specialized to new capabilities, including solving long-horizon, highly dexterous tasks, learning new short-horizon tasks from as few as 100 demonstrations, and adapting to completely novel robot embodiments. This is made possible because Gemini Robotics builds on top of the Gemini Robotics-ER model, the second model we introduce in this work. Gemini Robotics-ER (Embodied Reasoning) extends Gemini's multimodal reasoning capabilities into the physical world, with enhanced spatial and temporal understanding. This enables capabilities relevant to robotics, including object detection, pointing, trajectory and grasp prediction, multi-view correspondence, and 3D bounding box prediction. We show how this novel combination can support a variety of robotics applications. We also discuss and address important safety considerations related to this new class of robotics foundation models. The Gemini Robotics family marks a substantial step towards developing general-purpose robots that realize AI's potential in the physical world.
Proc4Gem: Foundation models for physical agency through procedural generation
Lin, Yixin, Humplik, Jan, Huang, Sandy H., Hasenclever, Leonard, Romano, Francesco, Saliceti, Stefano, Zheng, Daniel, Chen, Jose Enrique, Barros, Catarina, Collister, Adrian, Young, Matt, Dostmohamed, Adil, Moran, Ben, Caluwaerts, Ken, Giustina, Marissa, Moore, Joss, Connell, Kieran, Nori, Francesco, Heess, Nicolas, Bohez, Steven, Byravan, Arunkumar
In robot learning, it is common either to ignore the environment semantics, focusing on tasks like whole-body control that require reasoning only about robot-environment contacts, or, conversely, to ignore contact dynamics, focusing on grounding high-level movement in vision and language. In this work, we show that advances in generative modeling, photorealistic rendering, and procedural generation allow us to tackle tasks requiring both. By generating contact-rich trajectories with accurate physics in semantically diverse simulations, we can distill behaviors into large multimodal models that directly transfer to the real world: a system we call Proc4Gem. Specifically, we show that a foundation model, Gemini, fine-tuned on only simulation data, can be instructed in language to control a quadruped robot to push an object with its body to unseen targets in unseen real-world environments. Our real-world results demonstrate the promise of using simulation to imbue foundation models with physical agency. Videos can be found at our website: https://sites.google.com/view/proc4gem
Learning the RoPEs: Better 2D and 3D Position Encodings with STRING
Schenck, Connor, Reid, Isaac, Jacob, Mithun George, Bewley, Alex, Ainslie, Joshua, Rendleman, David, Jain, Deepali, Sharma, Mohit, Dubey, Avinava, Wahid, Ayzaan, Singh, Sumeet, Wagner, René, Ding, Tianli, Fu, Chuyuan, Byravan, Arunkumar, Varley, Jake, Gritsenko, Alexey, Minderer, Matthias, Kalashnikov, Dmitry, Tompson, Jonathan, Sindhwani, Vikas, Choromanski, Krzysztof
We introduce STRING: Separable Translationally Invariant Position Encodings. STRING extends Rotary Position Encodings, a recently proposed and widely used algorithm in large language models, via a unifying theoretical framework. Importantly, STRING still provides exact translation invariance, including for token coordinates of arbitrary dimensionality, whilst maintaining a low computational footprint. These properties are especially important in robotics, where efficient 3D token representation is key. We integrate STRING into Vision Transformers with RGB(-D) inputs (color plus optional depth), showing substantial gains, e.g., in open-vocabulary object detection and in robotics controllers. We complement our experiments with a rigorous mathematical analysis, proving the universality of our methods.
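To make the translation-invariance property concrete, here is a minimal sketch, assuming plain axis-separable rotary encodings rather than the full STRING construction: each coordinate axis rotates its own block of feature pairs by an angle proportional to the token's position on that axis, so attention scores between encoded queries and keys depend only on coordinate differences. Function names and frequency choices are illustrative, not the paper's implementation.

# Minimal sketch (not the official STRING code): axis-separable rotary position
# encodings for multi-dimensional token coordinates. Dot products between
# features encoded this way depend only on coordinate differences, which is
# the translation-invariance property the abstract refers to.
import numpy as np

def rope_encode(x, coords, base=10000.0):
    """Rotate feature pairs of x by angles proportional to token coordinates.

    x:      (num_tokens, dim) features, with dim divisible by 2 * coords.shape[1]
    coords: (num_tokens, num_axes) positions, e.g. 2D pixel or 3D point coordinates
    """
    num_tokens, dim = x.shape
    num_axes = coords.shape[1]
    pairs_per_axis = dim // (2 * num_axes)
    freqs = base ** (-np.arange(pairs_per_axis) / pairs_per_axis)  # per-pair frequencies
    out = x.copy()
    for a in range(num_axes):
        # Rotation angle for each token/pair on this axis: position * frequency.
        theta = coords[:, a:a + 1] * freqs[None, :]                # (tokens, pairs)
        cos, sin = np.cos(theta), np.sin(theta)
        start = a * 2 * pairs_per_axis
        x1 = x[:, start:start + pairs_per_axis]
        x2 = x[:, start + pairs_per_axis:start + 2 * pairs_per_axis]
        out[:, start:start + pairs_per_axis] = x1 * cos - x2 * sin
        out[:, start + pairs_per_axis:start + 2 * pairs_per_axis] = x1 * sin + x2 * cos
    return out

# Shifting all coordinates by a constant offset leaves every pairwise dot
# product, and hence attention, unchanged.
q = rope_encode(np.random.randn(4, 12), np.array([[0, 0], [1, 2], [3, 1], [2, 2]], float))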
Learning Robot Soccer from Egocentric Vision with Deep Reinforcement Learning
Tirumala, Dhruva, Wulfmeier, Markus, Moran, Ben, Huang, Sandy, Humplik, Jan, Lever, Guy, Haarnoja, Tuomas, Hasenclever, Leonard, Byravan, Arunkumar, Batchelor, Nathan, Sreendra, Neil, Patel, Kushal, Gwira, Marlon, Nori, Francesco, Riedmiller, Martin, Heess, Nicolas
We apply multi-agent deep reinforcement learning (RL) to train end-to-end robot soccer policies with fully onboard computation and sensing via egocentric RGB vision. This setting reflects many challenges of real-world robotics, including active perception, agile full-body control, and long-horizon planning in a dynamic, partially observable, multi-agent domain. We rely on large-scale, simulation-based data generation to obtain complex behaviors from egocentric vision that can be successfully transferred to physical robots using low-cost sensors. To achieve adequate visual realism, our simulation combines rigid-body physics with learned, realistic rendering via multiple Neural Radiance Fields (NeRFs). We combine teacher-based multi-agent RL and cross-experiment data reuse to enable the discovery of sophisticated soccer strategies. We analyze active-perception behaviors, including object tracking and ball seeking, that emerge when simply optimizing perception-agnostic soccer play. The agents display levels of performance and agility comparable to policies with access to privileged, ground-truth state. To our knowledge, this paper constitutes a first demonstration of end-to-end training for multi-agent robot soccer, mapping raw pixel observations to joint-level actions, that can be deployed in the real world. Videos of the game-play and analyses can be seen on our website: https://sites.google.com/view/vision-soccer.
Real-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning
Bhardwaj, Mohak, Lampe, Thomas, Neunert, Michael, Romano, Francesco, Abdolmaleki, Abbas, Byravan, Arunkumar, Wulfmeier, Markus, Riedmiller, Martin, Buchli, Jonas
Recent advances in real-world applications of reinforcement learning (RL) have relied on the ability to accurately simulate systems at scale. However, domains such as fluid dynamical systems exhibit complex dynamic phenomena that are hard to simulate at high integration rates, limiting the direct application of modern deep RL algorithms to often expensive or safety-critical hardware. In this work, we introduce "Box o Flows", a novel benchtop experimental control system for systematically evaluating RL algorithms in dynamic real-world scenarios. We describe the key components of the Box o Flows and, through a series of experiments, demonstrate how state-of-the-art model-free RL algorithms can synthesize a variety of complex behaviors via simple reward specifications. Furthermore, we explore the role of offline RL in data-efficient hypothesis testing by reusing past experiences. We believe that the insights gained from this preliminary study and the availability of systems like the Box o Flows pave the way for developing systematic RL algorithms that can be generally applied to complex, dynamical systems. Supplementary material and videos of experiments are available at https://sites.google.com/view/box-o-flows/home.
Foundations for Transfer in Reinforcement Learning: A Taxonomy of Knowledge Modalities
Wulfmeier, Markus, Byravan, Arunkumar, Bechtle, Sarah, Hausman, Karol, Heess, Nicolas
Contemporary artificial intelligence systems exhibit rapidly growing abilities, accompanied by growth in the required resources: expansive datasets and corresponding investments in computing infrastructure. Although earlier successes predominantly focused on constrained settings, recent strides in fundamental research and applications aspire to create increasingly general systems. This evolving landscape presents a dual panorama of opportunities and challenges in refining the generalisation and transfer of knowledge: the extraction from existing sources and adaptation as a comprehensive foundation for tackling new problems. Within the domain of reinforcement learning (RL), the representation of knowledge manifests through various modalities, including dynamics and reward models, value functions, policies, and the original data. This taxonomy systematically targets these modalities and frames its discussion based on their inherent properties and their alignment with different objectives and mechanisms for transfer. Where possible, we aim to provide coarse guidance delineating approaches that address requirements such as limiting environment interactions, maximising computational efficiency, and enhancing generalisation across varying axes of change. Finally, we analyse the reasons contributing to the prevalence or scarcity of specific forms of transfer, the inherent potential behind pushing these frontiers, and underscore the significance of transitioning from designed to learned transfer.
A Generalist Dynamics Model for Control
Schubert, Ingmar, Zhang, Jingwei, Bruce, Jake, Bechtle, Sarah, Parisotto, Emilio, Riedmiller, Martin, Springenberg, Jost Tobias, Byravan, Arunkumar, Hasenclever, Leonard, Heess, Nicolas
Figure 1 | Schematic overview of the data regimes for which experimental results are shown. These regimes are characterized by how much data from the target environment is available to the agent and how much (potentially generalizable) experience has been collected in other environments. The experiments demonstrate both that TDMs are capable single-environment models and that they generalize across environments. If sufficient data from the target environment is available, a single-environment specialist model can be learned (section 5.1). If only small amounts of data from the target environment are available, but more data exists from other environments, a generalist model can be pre-trained and then fine-tuned on the target environment (section 5.2.1). Finally, if a generalist model can be trained on large amounts of data from different environments, it can be applied zero-shot to the target environment without fine-tuning (section 5.2.2). An example of unsuccessful generalization is shown in section E.
Equivariant Data Augmentation for Generalization in Offline Reinforcement Learning
Pinneri, Cristina, Bechtle, Sarah, Wulfmeier, Markus, Byravan, Arunkumar, Zhang, Jingwei, Whitney, William F., Riedmiller, Martin
We present a novel approach to address the challenge of generalization in offline reinforcement learning (RL), where the agent learns from a fixed dataset without any additional interaction with the environment. Specifically, we aim to improve the agent's ability to generalize to out-of-distribution goals. To achieve this, we propose to learn a dynamics model and check if it is equivariant with respect to a fixed type of transformation, namely translations in the state space. We then use an entropy regularizer to increase the equivariant set and augment the dataset with the resulting transformed samples. Finally, we learn a new policy offline based on the augmented dataset, with an off-the-shelf offline RL algorithm. Our experimental results demonstrate that our approach can greatly improve the test performance of the policy on the considered environments.
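A minimal sketch of the augmentation loop described above, under assumptions of my own (a callable learned dynamics model, a fixed tolerance, and a candidate set of translations) rather than the paper's code: transitions are copied and translated only where the learned model is found to be approximately equivariant, and the enlarged dataset is then handed to any off-the-shelf offline RL algorithm.

# Hedged sketch of the data-augmentation idea, not the paper's implementation:
# if a learned dynamics model f(s, a) -> s' is approximately equivariant to a
# translation delta, i.e. f(s + delta, a) ~= f(s, a) + delta, then translated
# copies of logged transitions are plausible and can be added to the offline
# dataset. The `dynamics` callable, tolerance, and translation set are assumed.
import numpy as np

def is_equivariant(dynamics, s, a, delta, tol=1e-2):
    """Check translation equivariance of the learned model at (s, a)."""
    shifted_pred = dynamics(s + delta, a)
    translated_pred = dynamics(s, a) + delta
    return np.linalg.norm(shifted_pred - translated_pred) < tol

def augment(dataset, dynamics, deltas, tol=1e-2):
    """Return the dataset plus translated transitions that pass the check."""
    augmented = list(dataset)
    for (s, a, r, s_next) in dataset:
        for delta in deltas:
            if is_equivariant(dynamics, s, a, delta, tol):
                augmented.append((s + delta, a, r, s_next + delta))
    return augmented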
On Multi-objective Policy Optimization as a Tool for Reinforcement Learning: Case Studies in Offline RL and Finetuning
Abdolmaleki, Abbas, Huang, Sandy H., Vezzani, Giulia, Shahriari, Bobak, Springenberg, Jost Tobias, Mishra, Shruti, TB, Dhruva, Byravan, Arunkumar, Bousmalis, Konstantinos, Gyorgy, Andras, Szepesvari, Csaba, Hadsell, Raia, Heess, Nicolas, Riedmiller, Martin
Many advances that have improved the robustness and efficiency of deep reinforcement learning (RL) algorithms can, in one way or another, be understood as introducing additional objectives or constraints in the policy optimization step. This includes ideas as far-ranging as exploration bonuses, entropy regularization, and regularization toward teachers or data priors. Often, the task reward and auxiliary objectives are in conflict, and in this paper we argue that this makes it natural to treat these cases as instances of multi-objective (MO) optimization problems. We demonstrate how this perspective allows us to develop novel and more effective RL algorithms. In particular, we focus on offline RL and finetuning as case studies, and show that existing approaches can be understood as MO algorithms relying on linear scalarization. We hypothesize that replacing linear scalarization with a better algorithm can improve performance. We introduce Distillation of a Mixture of Experts (DiME), a new MORL algorithm that outperforms linear scalarization and can be applied to these non-standard MO problems. We demonstrate that for offline RL, DiME leads to a simple new algorithm that outperforms the state of the art. For finetuning, we derive new algorithms that learn to outperform the teacher policy.
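For readers unfamiliar with the term, linear scalarization, the baseline this abstract contrasts DiME against, simply collapses several objectives into one weighted sum before optimizing. A schematic sketch with placeholder objective names follows; it illustrates that pattern, not DiME itself.

# Illustrative only: "linear scalarization" means combining a task objective
# and auxiliary objectives into a single weighted loss before optimizing.
# A BC-regularized offline RL loss or a distillation-to-teacher loss both fit
# this template. The objective names and weights below are placeholders.
def scalarized_loss(task_loss, aux_losses, weights):
    """Combine a task objective and auxiliary objectives with fixed weights."""
    total = task_loss
    for loss, w in zip(aux_losses, weights):
        total = total + w * loss
    return total

# e.g. offline RL with a behavior-cloning regularizer and an entropy bonus:
# loss = scalarized_loss(policy_improvement_loss,
#                        [bc_loss, -entropy], weights=[0.5, 0.01])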
Towards A Unified Agent with Foundation Models
Di Palo, Norman, Byravan, Arunkumar, Hasenclever, Leonard, Wulfmeier, Markus, Heess, Nicolas, Riedmiller, Martin
Language Models and Vision Language Models have recently demonstrated unprecedented capabilities in understanding human intentions, reasoning, scene understanding, and planning-like behaviour, in text form, among many others. In this work, we investigate how to embed and leverage such abilities in Reinforcement Learning (RL) agents. We design a framework that uses language as the core reasoning tool, exploring how this enables an agent to tackle a series of fundamental RL challenges, such as efficient exploration, reusing experience data, scheduling skills, and learning from observations, which traditionally require separate, vertically designed algorithms. We test our method on a sparse-reward simulated robotic manipulation environment, where a robot needs to stack a set of objects. We demonstrate substantial performance improvements over baselines in exploration efficiency and ability to reuse data from offline datasets, and illustrate how to reuse learned skills to solve novel tasks or imitate videos of human experts.
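As a rough illustration of the pattern described, not the paper's system, the sketch below assumes a generic language model that proposes textual sub-goals and a vision-language model that scores whether an observation satisfies a sub-goal, so that logged trajectories can be relabeled and reused; all interfaces here are hypothetical.

# Hypothetical sketch of language-as-core-reasoning for RL: an LLM decomposes
# a sparse-reward task into sub-goals, and a VLM scores observations against
# each sub-goal so past trajectories can be relabeled and reused. The `llm`
# and `vlm` callables and their interfaces are assumptions, not a real API.
def propose_subgoals(llm, task_description):
    """Ask a language model to break a task into short textual sub-goals."""
    prompt = f"List the sub-goals needed to: {task_description}"
    return llm(prompt).splitlines()

def relabel(trajectory, subgoals, vlm, threshold=0.5):
    """Mark the first timestep at which each sub-goal is visually satisfied."""
    labels = {}
    for goal in subgoals:
        for t, observation in enumerate(trajectory):
            if vlm(observation, goal) > threshold:   # similarity / success score
                labels[goal] = t
                break
    return labels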