Piot, Bilal
Gemma 3 Technical Report
Gemma Team, Kamath, Aishwarya, Ferret, Johan, Pathak, Shreya, Vieillard, Nino, Merhej, Ramona, Perrin, Sarah, Matejovicova, Tatiana, Ramé, Alexandre, Rivière, Morgane, Rouillard, Louis, Mesnard, Thomas, Cideron, Geoffrey, Grill, Jean-bastien, Ramos, Sabela, Yvinec, Edouard, Casbon, Michelle, Pot, Etienne, Penchev, Ivo, Liu, Gaël, Visin, Francesco, Kenealy, Kathleen, Beyer, Lucas, Zhai, Xiaohai, Tsitsulin, Anton, Busa-Fekete, Robert, Feng, Alex, Sachdeva, Noveen, Coleman, Benjamin, Gao, Yi, Mustafa, Basil, Barr, Iain, Parisotto, Emilio, Tian, David, Eyal, Matan, Cherry, Colin, Peter, Jan-Thorsten, Sinopalnikov, Danila, Bhupatiraju, Surya, Agarwal, Rishabh, Kazemi, Mehran, Malkin, Dan, Kumar, Ravin, Vilar, David, Brusilovsky, Idan, Luo, Jiaming, Steiner, Andreas, Friesen, Abe, Sharma, Abhanshu, Sharma, Abheesht, Gilady, Adi Mayrav, Goedeckemeyer, Adrian, Saade, Alaa, Feng, Alex, Kolesnikov, Alexander, Bendebury, Alexei, Abdagic, Alvin, Vadi, Amit, György, András, Pinto, André Susano, Das, Anil, Bapna, Ankur, Miech, Antoine, Yang, Antoine, Paterson, Antonia, Shenoy, Ashish, Chakrabarti, Ayan, Piot, Bilal, Wu, Bo, Shahriari, Bobak, Petrini, Bryce, Chen, Charlie, Lan, Charline Le, Choquette-Choo, Christopher A., Carey, CJ, Brick, Cormac, Deutsch, Daniel, Eisenbud, Danielle, Cattle, Dee, Cheng, Derek, Paparas, Dimitris, Sreepathihalli, Divyashree Shivakumar, Reid, Doug, Tran, Dustin, Zelle, Dustin, Noland, Eric, Huizenga, Erwin, Kharitonov, Eugene, Liu, Frederick, Amirkhanyan, Gagik, Cameron, Glenn, Hashemi, Hadi, Klimczak-Plucińska, Hanna, Singh, Harman, Mehta, Harsh, Lehri, Harshal Tushar, Hazimeh, Hussein, Ballantyne, Ian, Szpektor, Idan, Nardini, Ivan, Pouget-Abadie, Jean, Chan, Jetha, Stanton, Joe, Wieting, John, Lai, Jonathan, Orbay, Jordi, Fernandez, Joseph, Newlan, Josh, Ji, Ju-yeong, Singh, Jyotinder, Black, Kat, Yu, Kathy, Hui, Kevin, Vodrahalli, Kiran, Greff, Klaus, Qiu, Linhai, Valentine, Marcella, Coelho, Marina, Ritter, Marvin, Hoffman, Matt, Watson, Matthew, Chaturvedi, Mayank, Moynihan, Michael, Ma, Min, Babar, Nabila, Noy, Natasha, Byrd, Nathan, Roy, Nick, Momchev, Nikola, Chauhan, Nilay, Sachdeva, Noveen, Bunyan, Oskar, Botarda, Pankil, Caron, Paul, Rubenstein, Paul Kishan, Culliton, Phil, Schmid, Philipp, Sessa, Pier Giuseppe, Xu, Pingmei, Stanczyk, Piotr, Tafti, Pouya, Shivanna, Rakesh, Wu, Renjie, Pan, Renke, Rokni, Reza, Willoughby, Rob, Vallu, Rohith, Mullins, Ryan, Jerome, Sammy, Smoot, Sara, Girgin, Sertan, Iqbal, Shariq, Reddy, Shashir, Sheth, Shruti, Põder, Siim, Bhatnagar, Sijal, Panyam, Sindhu Raghuram, Eiger, Sivan, Zhang, Susan, Liu, Tianqi, Yacovone, Trevor, Liechty, Tyler, Kalra, Uday, Evci, Utku, Misra, Vedant, Roseberry, Vincent, Feinberg, Vlad, Kolesnikov, Vlad, Han, Woohyun, Kwon, Woosuk, Chen, Xi, Chow, Yinlam, Zhu, Yuvein, Wei, Zichuan, Egyed, Zoltan, Cotruta, Victor, Giang, Minh, Kirk, Phoebe, Rao, Anand, Black, Kat, Babar, Nabila, Lo, Jessica, Moreira, Erica, Martins, Luiz Gustavo, Sanseviero, Omar, Gonzalez, Lucas, Gleicher, Zach, Warkentin, Tris, Mirrokni, Vahab, Senter, Evan, Collins, Eli, Barral, Joelle, Ghahramani, Zoubin, Hadsell, Raia, Matias, Yossi, Sculley, D., Petrov, Slav, Fiedel, Noah, Shazeer, Noam, Vinyals, Oriol, Dean, Jeff, Hassabis, Demis, Kavukcuoglu, Koray, Farabet, Clement, Buchatskaya, Elena, Alayrac, Jean-Baptiste, Anil, Rohan, Lepikhin, Dmitry, Borgeaud, Sebastian, Bachem, Olivier, Joulin, Armand, Andreev, Alek, Hardin, Cassidy, Dadashi, Robert, Hussenot, Léonard
We introduce Gemma 3, a multimodal addition to the Gemma family of lightweight open models, ranging in scale from 1 to 27 billion parameters. This version introduces vision understanding abilities, wider coverage of languages, and longer context of at least 128K tokens. We also change the architecture of the model to reduce the KV-cache memory that tends to explode with long context. This is achieved by increasing the ratio of local to global attention layers, and keeping the span of local attention short. The Gemma 3 models are trained with distillation and achieve superior performance to Gemma 2 for both pre-trained and instruction-finetuned versions. In particular, our novel post-training recipe significantly improves the math, chat, instruction-following and multilingual abilities, making Gemma3-4B-IT competitive with Gemma2-27B-IT and Gemma3-27B-IT comparable to Gemini-1.5-Pro across benchmarks. We release all our models to the community.
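To make the KV-cache argument concrete, here is a rough back-of-the-envelope sketch of per-sequence cache memory when sliding-window (local) layers are interleaved with global layers. The layer count, window size, head dimensions and the 5:1 ratio below are illustrative assumptions, not the published Gemma 3 configuration.

```python
# Back-of-the-envelope KV-cache size for a decoder interleaving local
# (sliding-window) and global attention layers.  All configuration values
# are illustrative assumptions, not Gemma 3's actual hyperparameters.

def kv_cache_bytes(context_len, n_layers, local_to_global_ratio,
                   window, n_kv_heads=8, head_dim=256, bytes_per_elem=2):
    """Approximate KV-cache size in bytes for one sequence."""
    n_global = n_layers // (local_to_global_ratio + 1)
    n_local = n_layers - n_global
    # Global layers cache K/V for the full context; local layers only keep
    # the most recent `window` tokens.
    tokens_cached = n_global * context_len + n_local * min(window, context_len)
    return 2 * tokens_cached * n_kv_heads * head_dim * bytes_per_elem  # 2 = K and V

ctx = 128_000
all_global = kv_cache_bytes(ctx, n_layers=48, local_to_global_ratio=0, window=ctx)
mostly_local = kv_cache_bytes(ctx, n_layers=48, local_to_global_ratio=5, window=1024)
print(f"all global layers : {all_global / 2**30:.1f} GiB")
print(f"5:1 local/global  : {mostly_local / 2**30:.1f} GiB")
```

Even with these made-up numbers, the design intent is visible: once local layers are restricted to a short window, almost all of the remaining cache is attributable to the few global layers.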
Preference Optimization as Probabilistic Inference
Abdolmaleki, Abbas, Piot, Bilal, Shahriari, Bobak, Springenberg, Jost Tobias, Hertweck, Tim, Joshi, Rishabh, Oh, Junhyuk, Bloesch, Michael, Lampe, Thomas, Heess, Nicolas, Buchli, Jonas, Riedmiller, Martin
The use of preference-annotated data for training machine learning models has a long history, going back to early algorithms for recommender systems and market research (Bonilla et al., 2010; Boutilier, 2002; Guo and Sanner, 2010). These days, preference optimization algorithms are receiving renewed attention since they are a natural candidate for shaping the outputs of deep learning systems, such as large language models (Ouyang et al., 2022; Team et al., 2024) or control policies, via human feedback (Azar et al., 2023; Christiano et al., 2017; Rafailov et al., 2023). Arguably, preference optimization algorithms can also be a natural choice even when direct human feedback is not available, but one instead aims to optimize a machine learning model based on feedback from a hand-coded or learned critic function that judges the desirability of solutions. Here, preference optimization methods are useful since they let us optimize the model to achieve desired outcomes based on relative rankings between outcomes alone (rather than requiring absolute labels or carefully crafted reward functions). Among preference optimization approaches, those based on directly using preference data - as opposed to casting preference optimization as reinforcement learning from (human) feedback - such as DPO (Rafailov et al., 2023), have emerged as particularly successful since they only require access to an offline dataset of paired preference data and are fairly robust to the application domain and hyperparameter settings. However, algorithms within this class make specific assumptions tailored to their application domain. They were designed to optimize LLMs from human feedback in the form of comparisons of generated sentences and thus, by design, require paired preference data (since they directly model a specific choice of preference distribution). We are interested in finding algorithms that are more flexible, and applicable in settings where the assumptions underlying DPO do not apply.
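As background for the pairing requirement discussed above, here is a minimal sketch of the standard DPO loss (Rafailov et al., 2023), which needs a preferred and a dis-preferred response for every prompt; the NumPy formulation and variable names are my own illustration.

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss; every prompt needs a (chosen, rejected) response pair.

    Each argument is an array of summed token log-probabilities, one entry per
    prompt, under the trained policy (logp_*) or the frozen reference (ref_logp_*).
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)), written in a numerically stable form.
    return np.mean(np.logaddexp(0.0, -margin))

# Toy batch of 3 prompts, each with a preferred and a dis-preferred response.
rng = np.random.default_rng(0)
print(dpo_loss(rng.normal(size=3), rng.normal(size=3),
               rng.normal(size=3), rng.normal(size=3)))
```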
RRM: Robust Reward Model Training Mitigates Reward Hacking
Liu, Tianqi, Xiong, Wei, Ren, Jie, Chen, Lichang, Wu, Junru, Joshi, Rishabh, Gao, Yang, Shen, Jiaming, Qin, Zhen, Yu, Tianhe, Sohn, Daniel, Makarova, Anastasiia, Liu, Jeremiah, Liu, Yuan, Piot, Bilal, Ittycheriah, Abe, Kumar, Aviral, Saleh, Mohammad
Reward models (RMs) play a pivotal role in aligning large language models (LLMs) with human preferences. However, traditional RM training, which relies on response pairs tied to specific prompts, struggles to disentangle prompt-driven preferences from prompt-independent artifacts, such as response length and format. In this work, we expose a fundamental limitation of current RM training methods, where RMs fail to effectively distinguish between contextual signals and irrelevant artifacts when determining preferences. To address this, we introduce a causal framework that learns preferences independent of these artifacts and propose a novel data augmentation technique designed to eliminate them. Extensive experiments show that our approach successfully filters out undesirable artifacts, yielding a more robust reward model (RRM). Our RRM improves the performance of a pairwise reward model trained on Gemma-2-9b-it on Reward-Bench, increasing accuracy from 80.61% to 84.15%. Additionally, we train two DPO policies using both the RM and RRM, demonstrating that the RRM significantly enhances DPO-aligned policies, improving MT-Bench scores from 7.27 to 8.31 and length-controlled win rates on AlpacaEval-2 from 33.46% to 52.49%.

Reinforcement Learning from Human Feedback (RLHF) has become a cornerstone in aligning large language models (LLMs) with human preferences to produce responses that are more helpful, honest, and harmless (Ouyang et al., 2022; Bai et al., 2022a). This approach involves training a reward model (RM) on human feedback, which then guides the LLM to generate high-quality responses through reinforcement learning.
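The augmentation technique itself is not spelled out above, so the sketch below shows one plausible way to instantiate the idea: pair each response with a response written for a different prompt, which should lose regardless of length or formatting. Treat the pairing scheme and field names as assumptions for illustration, not the paper's exact recipe.

```python
import random

def augment_with_off_prompt_negatives(dataset, rng=None):
    """Hypothetical augmentation in the spirit of RRM: pair each response with a
    response written for a *different* prompt, which should always lose on the
    current prompt regardless of surface artifacts such as length or format.

    `dataset` is a list of dicts with keys "prompt", "chosen", "rejected".
    """
    rng = rng or random.Random(0)
    augmented = list(dataset)
    for i, ex in enumerate(dataset):
        j = rng.choice([k for k in range(len(dataset)) if k != i])
        off_prompt_response = dataset[j]["chosen"]
        # Even the dis-preferred on-topic response should beat an off-topic one,
        # so the reward model cannot rely on length or formatting alone.
        augmented.append({"prompt": ex["prompt"], "chosen": ex["chosen"],
                          "rejected": off_prompt_response})
        augmented.append({"prompt": ex["prompt"], "chosen": ex["rejected"],
                          "rejected": off_prompt_response})
    return augmented
```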
Offline Regularised Reinforcement Learning for Large Language Models Alignment
Richemond, Pierre Harvey, Tang, Yunhao, Guo, Daniel, Calandriello, Daniele, Azar, Mohammad Gheshlaghi, Rafailov, Rafael, Pires, Bernardo Avila, Tarassov, Eugene, Spangher, Lucas, Ellsworth, Will, Severyn, Aliaksei, Mallinson, Jonathan, Shani, Lior, Shamir, Gil, Joshi, Rishabh, Liu, Tianqi, Munos, Remi, Piot, Bilal
The dominant framework for alignment of large language models (LLMs), whether through reinforcement learning from human feedback or direct preference optimisation, is to learn from preference data. This involves building datasets where each element is a quadruplet composed of a prompt, two independent responses (completions of the prompt) and a human preference between the two independent responses, yielding a preferred and a dis-preferred response. Such data is typically scarce and expensive to collect. On the other hand, \emph{single-trajectory} datasets, where each element is a triplet composed of a prompt, a response and a human feedback signal, are naturally more abundant. The canonical element of such datasets is, for instance, an LLM's response to a user's prompt followed by the user's feedback, such as a thumbs-up/down. Consequently, in this work, we propose DRO, or \emph{Direct Reward Optimisation}, as a framework and associated algorithms that do not require pairwise preferences. DRO uses a simple mean-squared objective that can be implemented in various ways. We validate our findings empirically, using T5 encoder-decoder language models, and show that DRO outperforms selected baselines such as Kahneman-Tversky Optimization (KTO). Thus, we confirm that DRO is a simple and empirically compelling method for single-trajectory policy optimisation.
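A minimal sketch of a mean-squared, single-trajectory objective in the spirit of DRO is given below; I assume the common form with a learned prompt-level baseline V(x), so the exact expression should be read as an approximation of the paper's loss rather than a faithful reproduction.

```python
import numpy as np

def dro_style_loss(logp, ref_logp, reward, value, beta=1.0):
    """Squared-error objective on single trajectories (prompt, response, reward).

    Assumed form: 0.5 * (r(x, y) - V(x) - beta * log(pi(y|x) / pi_ref(y|x)))^2,
    averaged over the dataset; the policy log-ratio and the prompt value V(x)
    are trained jointly to drive this residual to zero.
    """
    residual = reward - value - beta * (logp - ref_logp)
    return 0.5 * np.mean(residual ** 2)
```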
Multi-turn Reinforcement Learning from Preference Human Feedback
Shani, Lior, Rosenberg, Aviv, Cassel, Asaf, Lang, Oran, Calandriello, Daniele, Zipori, Avital, Noga, Hila, Keller, Orgad, Piot, Bilal, Szpektor, Idan, Hassidim, Avinatan, Matias, Yossi, Munos, Rémi
Reinforcement Learning from Human Feedback (RLHF) has become the standard approach for aligning Large Language Models (LLMs) with human preferences, allowing LLMs to demonstrate remarkable abilities in various tasks. Existing methods work by emulating the preferences at the single decision (turn) level, limiting their capabilities in settings that require planning or multi-turn interactions to achieve a long-term goal. In this paper, we address this issue by developing novel methods for Reinforcement Learning (RL) from preference feedback between two full multi-turn conversations. In the tabular setting, we present a novel mirror-descent-based policy optimization algorithm for the general multi-turn preference-based RL problem, and prove its convergence to a Nash equilibrium. To evaluate performance, we create a new environment, Education Dialogue, where a teacher agent guides a student in learning a random topic, and show that a deep RL variant of our algorithm outperforms RLHF baselines. Finally, we show that in an environment with explicit rewards, our algorithm recovers the same performance as a reward-based RL baseline, despite relying solely on a weaker preference signal.
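For readers unfamiliar with the mirror-descent machinery referenced above, the snippet below shows the generic KL-regularised (multiplicative-weights) policy update that such algorithms build on; it is a textbook single-step illustration, not the paper's multi-turn algorithm.

```python
import numpy as np

def mirror_descent_step(policy, advantages, step_size=0.1):
    """One KL-regularised mirror-descent update on a tabular policy:
    pi_new(a) is proportional to pi_old(a) * exp(step_size * A(a)).
    """
    logits = np.log(policy) + step_size * advantages
    logits -= logits.max()                      # numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum()

pi = np.ones(4) / 4                             # uniform start over 4 actions
print(mirror_descent_step(pi, np.array([0.0, 1.0, -1.0, 0.5])))
```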
Human Alignment of Large Language Models through Online Preference Optimisation
Calandriello, Daniele, Guo, Daniel, Munos, Remi, Rowland, Mark, Tang, Yunhao, Pires, Bernardo Avila, Richemond, Pierre Harvey, Lan, Charline Le, Valko, Michal, Liu, Tianqi, Joshi, Rishabh, Zheng, Zeyu, Piot, Bilal
Ensuring alignment of language models' outputs with human preferences is critical to guarantee a useful, safe, and pleasant user experience. Thus, human alignment has been extensively studied recently and several methods such as Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimisation (DPO) and Sequence Likelihood Calibration (SLiC) have emerged. In this paper, our contribution is two-fold. First, we show the equivalence between two recent alignment methods, namely Identity Policy Optimisation (IPO) and Nash Mirror Descent (Nash-MD). Second, we introduce a generalisation of IPO, named IPO-MD, that leverages the regularised sampling approach proposed by Nash-MD. The equivalence between IPO and Nash-MD may seem surprising at first sight, since IPO is an offline method whereas Nash-MD is an online method using a preference model. However, this equivalence can be proven when we consider the online version of IPO, that is, when both generations are sampled by the online policy and annotated by a trained preference model. Optimising the IPO loss with such a stream of data then becomes equivalent to finding the Nash equilibrium of the preference model through self-play. Building on this equivalence, we introduce the IPO-MD algorithm, which generates data with a mixture policy (between the online and reference policy), similarly to the general Nash-MD algorithm. We compare online-IPO and IPO-MD to different online versions of existing losses on preference data, such as DPO and SLiC, on a summarisation task.
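For concreteness, the online-IPO setup described above can be sketched as follows: both responses are sampled from the current (or mixture) policy, a trained preference model decides the winner, and the standard IPO loss is applied to the resulting pair. The NumPy helper below shows only the loss; the names and the value of tau are illustrative.

```python
import numpy as np

def ipo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, tau=0.1):
    """IPO loss on a batch of (preferred, dis-preferred) pairs:
    ((log pi(y_w)/pi_ref(y_w) - log pi(y_l)/pi_ref(y_l)) - 1/(2*tau))^2.
    In online IPO / IPO-MD the pairs come from the current (or mixture) policy
    and the winner is picked by a trained preference model.
    """
    h = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return np.mean((h - 1.0 / (2.0 * tau)) ** 2)
```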
Generalized Preference Optimization: A Unified Approach to Offline Alignment
Tang, Yunhao, Guo, Zhaohan Daniel, Zheng, Zeyu, Calandriello, Daniele, Munos, Rémi, Rowland, Mark, Richemond, Pierre Harvey, Valko, Michal, Pires, Bernardo Ávila, Piot, Bilal
Offline preference optimization allows fine-tuning large models directly from offline data, and has proved effective in recent alignment practices. We propose generalized preference optimization (GPO), a family of offline losses parameterized by a general class of convex functions. GPO enables a unified view over preference optimization, encompassing existing algorithms such as DPO, IPO and SLiC as special cases, while naturally introducing new variants. The GPO framework also sheds light on how offline algorithms enforce regularization, through the design of the convex function that defines the loss. Our analysis and experiments reveal the connections and subtle differences between the offline regularization and the KL divergence regularization intended by the canonical RLHF formulation. In all, our results present new algorithmic toolkits and empirical insights to alignment practitioners.
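A small sketch of the GPO template follows, with the three special cases named in the abstract written as choices of the convex function f; the exact constants (e.g. the margins in the squared and hinge losses) are quoted from memory and should be checked against the paper.

```python
import numpy as np

# One loss template, parameterised by a convex function f of the scaled
# log-ratio difference rho; the concrete choices below follow the special
# cases named in the abstract, with illustrative constants.
F = {
    "dpo":  lambda rho: np.logaddexp(0.0, -rho),     # logistic: -log sigmoid(rho)
    "slic": lambda rho: np.maximum(0.0, 1.0 - rho),  # hinge
    "ipo":  lambda rho: (rho - 1.0) ** 2,            # squared
}

def gpo_loss(name, logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    rho = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return np.mean(F[name](rho))
```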
Direct Language Model Alignment from Online AI Feedback
Guo, Shangmin, Zhang, Biao, Liu, Tianlin, Liu, Tianqi, Khalman, Misha, Llinares, Felipe, Rame, Alexandre, Mesnard, Thomas, Zhao, Yao, Piot, Bilal, Ferret, Johan, Blondel, Mathieu
Direct alignment from preferences (DAP) methods, such as DPO, have recently emerged as efficient alternatives to reinforcement learning from human feedback (RLHF) that do not require a separate reward model. However, the preference datasets used in DAP methods are usually collected ahead of training and never updated, so the feedback is purely offline. Moreover, responses in these datasets are often sampled from a language model distinct from the one being aligned, and since the model evolves over training, the alignment phase is inevitably off-policy. In this study, we posit that online feedback is key and improves DAP methods. Our method, online AI feedback (OAIF), uses an LLM as annotator: on each training iteration, we sample two responses from the current model and prompt the LLM annotator to choose which one is preferred, thus providing online feedback. Despite its simplicity, we demonstrate via human evaluation in several tasks that OAIF outperforms both offline DAP and RLHF methods. We further show that the feedback leveraged in OAIF is easily controllable, via instruction prompts to the LLM annotator.
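The OAIF loop is simple enough to sketch directly. In the sketch below, sample, annotate and dap_update are caller-supplied placeholders (hypothetical names) standing in for the current model's sampler, the prompted LLM annotator, and any direct-alignment update such as a DPO step.

```python
def oaif_step(policy, prompt, sample, annotate, dap_update):
    """One OAIF iteration (sketch). `sample`, `annotate` and `dap_update` are
    caller-supplied callables (hypothetical names): the current model's sampler,
    an LLM annotator prompted to pick the preferred response, and any direct
    alignment (DAP) update such as a DPO step.
    """
    y1, y2 = sample(policy, prompt), sample(policy, prompt)   # on-policy responses
    first_is_preferred = annotate(prompt, y1, y2)             # online AI feedback
    chosen, rejected = (y1, y2) if first_is_preferred else (y2, y1)
    return dap_update(policy, prompt, chosen, rejected)
```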
Nash Learning from Human Feedback
Munos, Rémi, Valko, Michal, Calandriello, Daniele, Azar, Mohammad Gheshlaghi, Rowland, Mark, Guo, Zhaohan Daniel, Tang, Yunhao, Geist, Matthieu, Mesnard, Thomas, Michi, Andrea, Selvi, Marco, Girgin, Sertan, Momchev, Nikola, Bachem, Olivier, Mankowitz, Daniel J., Precup, Doina, Piot, Bilal
Reinforcement learning from human feedback (RLHF) has emerged as the main paradigm for aligning large language models (LLMs) with human preferences. Typically, RLHF involves the initial step of learning a reward model from human feedback, often expressed as preferences between pairs of text generations produced by a pre-trained LLM. Subsequently, the LLM's policy is fine-tuned by optimizing it to maximize the learned reward through a reinforcement learning algorithm. However, an inherent limitation of current reward models is their inability to fully represent the richness of human preferences and their dependency on the sampling distribution. In this study, we introduce an alternative pipeline for the fine-tuning of LLMs using pairwise human feedback. Our approach entails the initial learning of a preference model, which is conditioned on two inputs given a prompt, followed by the pursuit of a policy that consistently generates responses preferred over those generated by any competing policy, thus defining the Nash equilibrium of this preference model. We term this approach Nash learning from human feedback (NLHF). In the context of a tabular policy representation, we present a novel algorithmic solution, Nash-MD, founded on the principles of mirror descent. This algorithm produces a sequence of policies, with the last iterate converging to the regularized Nash equilibrium. Additionally, we explore parametric representations of policies and introduce gradient descent algorithms for deep-learning architectures. To demonstrate the effectiveness of our approach, we present experimental results involving the fine-tuning of an LLM for a text summarization task. We believe NLHF offers a compelling avenue for preference learning and policy optimization with the potential of advancing the field of aligning LLMs with human preferences.
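To illustrate what "the Nash equilibrium of this preference model" means, the toy below runs plain exponential-weights self-play on a small preference matrix with a cyclic (rock-paper-scissors-like) structure. This is a standard no-regret scheme used purely for illustration, not the paper's Nash-MD algorithm, and it omits the KL regularisation towards a reference policy.

```python
import numpy as np

# P[i, j] = probability that response i is preferred to response j; the cyclic
# structure below forces a mixed equilibrium.
P = np.array([[0.5, 0.7, 0.2],
              [0.3, 0.5, 0.9],
              [0.8, 0.1, 0.5]])

pi = np.ones(3) / 3
avg = np.zeros(3)
eta = 0.5
for _ in range(2000):
    win_rate = P @ pi - 0.5           # expected advantage of each response vs pi
    pi = pi * np.exp(eta * win_rate)  # exponential-weights (no-regret) update
    pi /= pi.sum()
    avg += pi
print("approximate Nash policy:", np.round(avg / 2000, 3))
```

The time-averaged policy approximates a strategy that no alternative policy beats more than half the time under P, which is the equilibrium notion NLHF targets.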
A General Theoretical Paradigm to Understand Learning from Human Preferences
Azar, Mohammad Gheshlaghi, Rowland, Mark, Piot, Bilal, Guo, Daniel, Calandriello, Daniele, Valko, Michal, Munos, Rémi
The prevalent deployment of learning from human preferences through reinforcement learning (RLHF) relies on two important approximations: the first assumes that pairwise preferences can be substituted with pointwise rewards; the second assumes that a reward model trained on these pointwise rewards can generalize from collected data to out-of-distribution data sampled by the policy. Recently, Direct Preference Optimisation (DPO) has been proposed as an approach that bypasses the second approximation and learns a policy directly from collected data, without the reward-modelling stage. However, this method still heavily relies on the first approximation. In this paper, we try to gain a deeper theoretical understanding of these practical algorithms. In particular, we derive a new general objective called $\Psi$PO for learning from human preferences that is expressed in terms of pairwise preferences and therefore bypasses both approximations. This new general objective allows us to perform an in-depth analysis of the behavior of RLHF and DPO (as special cases of $\Psi$PO) and to identify their potential pitfalls. We then consider another special case of $\Psi$PO, obtained by setting $\Psi$ simply to the identity, for which we can derive an efficient optimisation procedure, prove performance guarantees and demonstrate its empirical superiority to DPO on some illustrative examples.
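Up to notational details, the $\Psi$PO objective and its identity special case (IPO) can be summarised as follows; the precise constants should be checked against the paper:
\[
\max_{\pi}\; \mathbb{E}_{x \sim \rho,\; y \sim \pi(\cdot\mid x),\; y' \sim \mu(\cdot\mid x)}\big[\Psi\big(p^{*}(y \succ y' \mid x)\big)\big] \;-\; \tau\,\mathrm{KL}\big(\pi \,\|\, \pi_{\mathrm{ref}}\big),
\]
and, for $\Psi$ equal to the identity, the resulting IPO method minimises the empirical loss
\[
\mathbb{E}_{(x,\,y_w,\,y_l)}\Big[\Big(\log \frac{\pi(y_w \mid x)\,\pi_{\mathrm{ref}}(y_l \mid x)}{\pi(y_l \mid x)\,\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \frac{1}{2\tau}\Big)^{2}\Big].
\]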