Pot, Etienne
Gemma 3 Technical Report
Gemma Team, Kamath, Aishwarya, Ferret, Johan, Pathak, Shreya, Vieillard, Nino, Merhej, Ramona, Perrin, Sarah, Matejovicova, Tatiana, Ramé, Alexandre, Rivière, Morgane, Rouillard, Louis, Mesnard, Thomas, Cideron, Geoffrey, Grill, Jean-bastien, Ramos, Sabela, Yvinec, Edouard, Casbon, Michelle, Pot, Etienne, Penchev, Ivo, Liu, Gaël, Visin, Francesco, Kenealy, Kathleen, Beyer, Lucas, Zhai, Xiaohai, Tsitsulin, Anton, Busa-Fekete, Robert, Feng, Alex, Sachdeva, Noveen, Coleman, Benjamin, Gao, Yi, Mustafa, Basil, Barr, Iain, Parisotto, Emilio, Tian, David, Eyal, Matan, Cherry, Colin, Peter, Jan-Thorsten, Sinopalnikov, Danila, Bhupatiraju, Surya, Agarwal, Rishabh, Kazemi, Mehran, Malkin, Dan, Kumar, Ravin, Vilar, David, Brusilovsky, Idan, Luo, Jiaming, Steiner, Andreas, Friesen, Abe, Sharma, Abhanshu, Sharma, Abheesht, Gilady, Adi Mayrav, Goedeckemeyer, Adrian, Saade, Alaa, Feng, Alex, Kolesnikov, Alexander, Bendebury, Alexei, Abdagic, Alvin, Vadi, Amit, György, András, Pinto, André Susano, Das, Anil, Bapna, Ankur, Miech, Antoine, Yang, Antoine, Paterson, Antonia, Shenoy, Ashish, Chakrabarti, Ayan, Piot, Bilal, Wu, Bo, Shahriari, Bobak, Petrini, Bryce, Chen, Charlie, Lan, Charline Le, Choquette-Choo, Christopher A., Carey, CJ, Brick, Cormac, Deutsch, Daniel, Eisenbud, Danielle, Cattle, Dee, Cheng, Derek, Paparas, Dimitris, Sreepathihalli, Divyashree Shivakumar, Reid, Doug, Tran, Dustin, Zelle, Dustin, Noland, Eric, Huizenga, Erwin, Kharitonov, Eugene, Liu, Frederick, Amirkhanyan, Gagik, Cameron, Glenn, Hashemi, Hadi, Klimczak-Plucińska, Hanna, Singh, Harman, Mehta, Harsh, Lehri, Harshal Tushar, Hazimeh, Hussein, Ballantyne, Ian, Szpektor, Idan, Nardini, Ivan, Pouget-Abadie, Jean, Chan, Jetha, Stanton, Joe, Wieting, John, Lai, Jonathan, Orbay, Jordi, Fernandez, Joseph, Newlan, Josh, Ji, Ju-yeong, Singh, Jyotinder, Black, Kat, Yu, Kathy, Hui, Kevin, Vodrahalli, Kiran, Greff, Klaus, Qiu, Linhai, Valentine, Marcella, Coelho, Marina, Ritter, Marvin, Hoffman, Matt, Watson, Matthew, Chaturvedi, Mayank, Moynihan, Michael, Ma, Min, Babar, Nabila, Noy, Natasha, Byrd, Nathan, Roy, Nick, Momchev, Nikola, Chauhan, Nilay, Sachdeva, Noveen, Bunyan, Oskar, Botarda, Pankil, Caron, Paul, Rubenstein, Paul Kishan, Culliton, Phil, Schmid, Philipp, Sessa, Pier Giuseppe, Xu, Pingmei, Stanczyk, Piotr, Tafti, Pouya, Shivanna, Rakesh, Wu, Renjie, Pan, Renke, Rokni, Reza, Willoughby, Rob, Vallu, Rohith, Mullins, Ryan, Jerome, Sammy, Smoot, Sara, Girgin, Sertan, Iqbal, Shariq, Reddy, Shashir, Sheth, Shruti, Põder, Siim, Bhatnagar, Sijal, Panyam, Sindhu Raghuram, Eiger, Sivan, Zhang, Susan, Liu, Tianqi, Yacovone, Trevor, Liechty, Tyler, Kalra, Uday, Evci, Utku, Misra, Vedant, Roseberry, Vincent, Feinberg, Vlad, Kolesnikov, Vlad, Han, Woohyun, Kwon, Woosuk, Chen, Xi, Chow, Yinlam, Zhu, Yuvein, Wei, Zichuan, Egyed, Zoltan, Cotruta, Victor, Giang, Minh, Kirk, Phoebe, Rao, Anand, Black, Kat, Babar, Nabila, Lo, Jessica, Moreira, Erica, Martins, Luiz Gustavo, Sanseviero, Omar, Gonzalez, Lucas, Gleicher, Zach, Warkentin, Tris, Mirrokni, Vahab, Senter, Evan, Collins, Eli, Barral, Joelle, Ghahramani, Zoubin, Hadsell, Raia, Matias, Yossi, Sculley, D., Petrov, Slav, Fiedel, Noah, Shazeer, Noam, Vinyals, Oriol, Dean, Jeff, Hassabis, Demis, Kavukcuoglu, Koray, Farabet, Clement, Buchatskaya, Elena, Alayrac, Jean-Baptiste, Anil, Rohan, Lepikhin, Dmitry, Borgeaud, Sebastian, Bachem, Olivier, Joulin, Armand, Andreev, Alek, Hardin, Cassidy, Dadashi, Robert, Hussenot, Léonard
We introduce Gemma 3, a multimodal addition to the Gemma family of lightweight open models, ranging in scale from 1 to 27 billion parameters. This version introduces vision understanding abilities, wider coverage of languages, and longer context of at least 128K tokens. We also change the architecture of the model to reduce the KV-cache memory that tends to explode with long context: we increase the ratio of local to global attention layers and keep the span of local attention short. The Gemma 3 models are trained with distillation and achieve superior performance to Gemma 2 for both pre-trained and instruction-finetuned versions. In particular, our novel post-training recipe significantly improves the math, chat, instruction-following, and multilingual abilities, making Gemma3-4B-IT competitive with Gemma2-27B-IT and Gemma3-27B-IT comparable to Gemini-1.5-Pro across benchmarks. We release all our models to the community.
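The KV-cache argument in the abstract can be made concrete with a back-of-the-envelope estimate. The sketch below is only an illustration: the 48-layer depth, 1024-token sliding window, 5:1 local-to-global ratio, and the 8 KV heads of dimension 256 with 16-bit values are assumed numbers, not the published Gemma 3 configuration.

```python
# Rough KV-cache size comparison: all-global attention vs. an interleaved
# local/global layout where local layers only cache a short sliding window.
# All model dimensions below are illustrative assumptions.

def kv_cache_bytes(num_layers, context_len, window, local_per_global,
                   num_kv_heads=8, head_dim=256, bytes_per_value=2):
    """Sum the K and V storage across all layers for a single sequence."""
    total = 0
    for layer in range(num_layers):
        is_global = (layer % (local_per_global + 1)) == local_per_global
        # Global layers cache the full context; local layers only their window.
        cached_tokens = context_len if is_global else min(window, context_len)
        total += 2 * cached_tokens * num_kv_heads * head_dim * bytes_per_value
    return total

all_global = kv_cache_bytes(48, 128_000, 128_000, 0)  # every layer global
mixed = kv_cache_bytes(48, 128_000, 1024, 5)          # 5 local layers per global
print(f"all-global: {all_global / 2**30:.1f} GiB, mixed: {mixed / 2**30:.1f} GiB")
```

Under these assumptions only one in six layers caches the full 128K context, which is where the bulk of the memory saving comes from.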
Scaling 4D Representations
Carreira, João, Gokay, Dilara, King, Michael, Zhang, Chuhan, Rocco, Ignacio, Mahendran, Aravindh, Keck, Thomas Albert, Heyward, Joseph, Koppula, Skanda, Pot, Etienne, Erdogan, Goker, Hasson, Yana, Yang, Yi, Greff, Klaus, Moing, Guillaume Le, van Steenkiste, Sjoerd, Zoran, Daniel, Hudson, Drew A., Vélez, Pedro, Polanía, Luisa, Friedman, Luke, Duvarney, Chris, Goroshin, Ross, Allen, Kelsey, Walker, Jacob, Kabra, Rishabh, Aboussouan, Eric, Sun, Jennifer, Kipf, Thomas, Doersch, Carl, Pătrăucean, Viorica, Damen, Dima, Luc, Pauline, Sajjadi, Mehdi S. M., Zisserman, Andrew
Scaling has not yet been convincingly demonstrated for pure self-supervised learning from video; however, prior work has focused its evaluations on semantics-related tasks such as action classification and ImageNet classification. In this paper we instead evaluate self-supervised learning on non-semantic vision tasks that are more spatial (3D) and temporal (+1D = 4D), such as camera pose estimation, point and object tracking, and depth estimation. We show that, by learning from very large video datasets, masked auto-encoding (MAE) with transformer video models actually scales, consistently improving performance on these 4D tasks as model size increases from 20M parameters all the way to 22B, by far the largest self-supervised video model reported to date. A rigorous apples-to-apples comparison with many recent image and video models demonstrates the benefits of scaling 4D representations.
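As a rough illustration of the masked auto-encoding setup the abstract builds on, the sketch below splits a video into spatio-temporal patches and hides a large fraction of them from the encoder. The tubelet size, the 90% mask ratio, and the pixel-space reconstruction target are assumptions for illustration, not the paper's exact recipe.

```python
# Minimal video-MAE-style preprocessing: tubelet patching plus random masking.
import numpy as np

def tubelet_patches(video, t=2, p=16):
    """Split a (T, H, W, C) video into flattened (t, p, p, C) tubelets."""
    T, H, W, C = video.shape
    video = video[: T - T % t, : H - H % p, : W - W % p]
    grid = video.reshape(T // t, t, H // p, p, W // p, p, C)
    return grid.transpose(0, 2, 4, 1, 3, 5, 6).reshape(-1, t * p * p * C)

def random_mask(num_patches, mask_ratio=0.9, rng=np.random.default_rng(0)):
    """Boolean mask hiding `mask_ratio` of the patches from the encoder."""
    mask = np.zeros(num_patches, dtype=bool)
    masked_ids = rng.choice(num_patches, int(num_patches * mask_ratio), replace=False)
    mask[masked_ids] = True
    return mask

video = np.random.rand(16, 224, 224, 3).astype(np.float32)
patches = tubelet_patches(video)
mask = random_mask(len(patches))
# The encoder would see only patches[~mask]; the decoder reconstructs patches[mask],
# trained with e.g. a mean squared error on the masked patches.
```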
Moving Off-the-Grid: Scene-Grounded Video Representations
van Steenkiste, Sjoerd, Zoran, Daniel, Yang, Yi, Rubanova, Yulia, Kabra, Rishabh, Doersch, Carl, Gokay, Dilara, Heyward, Joseph, Pot, Etienne, Greff, Klaus, Hudson, Drew A., Keck, Thomas Albert, Carreira, Joao, Dosovitskiy, Alexey, Sajjadi, Mehdi S. M., Kipf, Thomas
Current vision models typically maintain a fixed correspondence between their representation structure and image space. Each layer comprises a set of tokens arranged "on-the-grid," which biases patches or tokens to encode information at a specific spatial (or spatio-temporal) location. In this work we present Moving Off-the-Grid (MooG), a self-supervised video representation model that offers an alternative approach, allowing tokens to move "off-the-grid" so that they can represent scene elements consistently even as those elements move across the image plane through time. By combining cross-attention with positional embeddings, we disentangle the representation structure from the image structure. We find that a simple self-supervised objective, next-frame prediction, trained on video data results in a set of latent tokens that bind to specific scene structures and track them as they move. We demonstrate the usefulness of MooG's learned representation both qualitatively and quantitatively by training readouts on top of it for a variety of downstream tasks, and we show that, compared to "on-the-grid" baselines, MooG provides a strong foundation for a range of vision tasks.
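The "off-the-grid" idea can be illustrated with a toy cross-attention loop: a fixed set of latent tokens pulls information from wherever it sits on the frame's feature grid, and next-frame prediction reads the latents back out through positional queries. Single-head attention, the random features, and all shapes below are simplifying assumptions, not the MooG architecture itself.

```python
# Toy sketch: latent tokens updated by cross-attending into grid features,
# then read out by next-frame positional queries.
import numpy as np

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention, (Nq, d) x (Nk, d)."""
    scores = queries @ keys.T / np.sqrt(queries.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

d = 64
num_latents, num_patches = 128, 14 * 14
latents = np.random.randn(num_latents, d)      # tokens not tied to grid positions
frame_feats = np.random.randn(num_patches, d)  # patch features + positional embeddings

# Update step: latents gather information from anywhere on the grid.
latents = latents + cross_attention(latents, frame_feats, frame_feats)

# Prediction step: positional queries for the *next* frame attend into the
# latents; a small head (omitted) would map the result back to pixels.
next_frame_queries = np.random.randn(num_patches, d)   # positional embeddings only
predicted_feats = cross_attention(next_frame_queries, latents, latents)
```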
RUST: Latent Neural Scene Representations from Unposed Imagery
Sajjadi, Mehdi S. M., Mahendran, Aravindh, Kipf, Thomas, Pot, Etienne, Duckworth, Daniel, Lucic, Mario, Greff, Klaus
Inferring the structure of 3D scenes from 2D observations is a fundamental challenge in computer vision. Recently popularized approaches based on neural scene representations have had tremendous impact and have been applied across a variety of domains. One of the major remaining challenges in this space is training a single model that provides latent representations which generalize effectively beyond a single scene. The Scene Representation Transformer (SRT) has shown promise in this direction, but scaling it to a larger set of diverse scenes is challenging and necessitates accurately posed ground truth data. To address this problem, we propose RUST (Really Unposed Scene representation Transformer), a pose-free approach to novel view synthesis trained on RGB images alone. Our main insight is that one can train a Pose Encoder that peeks at the target image and learns a latent pose embedding which is used by the decoder for view synthesis. We perform an empirical investigation into the learned latent pose structure and show that it allows meaningful test-time camera transformations and accurate explicit pose readouts. Perhaps surprisingly, RUST achieves quality similar to that of methods with access to perfect camera poses, thereby unlocking the potential for large-scale training of amortized neural scene representations.
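The training step described above can be summarized as a data-flow sketch: unposed input views are encoded into a scene representation, a "Pose Encoder" peeks at the target view to produce a latent pose, and the decoder combines both to reconstruct the target. Every module below is a random linear stand-in for what is actually a transformer, so this only shows which tensors feed into which component; all shapes are assumptions.

```python
# Data-flow sketch of a pose-free view-synthesis training step.
import numpy as np

rng = np.random.default_rng(0)
d = 32

def linear(out_dim, in_dim):
    """Random linear map used as a placeholder for a learned module."""
    W = rng.standard_normal((out_dim, in_dim)) / np.sqrt(in_dim)
    return lambda x: x @ W.T

scene_encoder = linear(d, d)      # input views -> set-latent scene representation
pose_encoder = linear(d, 2 * d)   # (target glimpse, scene summary) -> latent pose
decoder = linear(3, 2 * d)        # (scene summary, latent pose) -> RGB

input_views = rng.standard_normal((5 * 196, d))  # tokens from 5 unposed RGB views
target_glimpse = rng.standard_normal((d,))       # pooled features of the target image

scene_latent = scene_encoder(input_views)                                    # (980, d)
latent_pose = pose_encoder(np.concatenate([target_glimpse, scene_latent.mean(0)]))
rgb = decoder(np.concatenate([scene_latent.mean(0), latent_pose]))
# Training minimizes reconstruction error between `rgb` and the target pixels;
# at test time the latent pose can be manipulated to move the camera.
```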
Scene Representation Transformer: Geometry-Free Novel View Synthesis Through Set-Latent Scene Representations
Sajjadi, Mehdi S. M., Meyer, Henning, Pot, Etienne, Bergmann, Urs, Greff, Klaus, Radwan, Noha, Vora, Suhani, Lucic, Mario, Duckworth, Daniel, Dosovitskiy, Alexey, Uszkoreit, Jakob, Funkhouser, Thomas, Tagliasacchi, Andrea
A classical problem in computer vision is to infer a 3D scene representation from a few images that can be used to render novel views at interactive rates. Previous work focuses on reconstructing pre-defined 3D representations, e.g. textured meshes, or implicit representations, e.g. radiance fields, and often requires input images with precise camera poses and long processing times for each novel scene. In this work, we propose the Scene Representation Transformer (SRT), a method which processes posed or unposed RGB images of a new area, infers a "set-latent scene representation", and synthesises novel views, all in a single feed-forward pass. To calculate the scene representation, we propose a generalization of the Vision Transformer to sets of images, enabling global information integration and hence 3D reasoning. An efficient decoder transformer parameterizes the light field by attending into the scene representation to render novel views. Learning is supervised end-to-end by minimizing a novel-view reconstruction error. We show that this method outperforms recent baselines in terms of PSNR and speed on synthetic datasets, including a new dataset created for this paper. Further, we demonstrate that SRT scales to support interactive visualization and semantic segmentation of real-world outdoor environments using Street View imagery.
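A minimal sketch of the feed-forward rendering path described above: input images are encoded into a flat set of latent tokens, and a novel view is rendered by letting per-ray queries cross-attend into that set-latent scene representation. Single-head attention and random features stand in for the actual CNN-plus-transformer encoder and the transformer decoder; all shapes are illustrative assumptions.

```python
# Sketch: ray queries attending into a set-latent scene representation.
import numpy as np

def attend(queries, keys, values):
    """Single-head scaled dot-product attention."""
    scores = queries @ keys.T / np.sqrt(queries.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ values

d = 64
# "Set-latent scene representation": one flat token set pooled over all input views.
num_views, tokens_per_view = 5, 196
scene_tokens = np.random.randn(num_views * tokens_per_view, d)

# A novel view is rendered one ray (pixel) at a time: each query encodes the
# ray origin/direction of the target camera, here just a random placeholder.
num_rays = 64 * 64                                    # a small target image
ray_queries = np.random.randn(num_rays, d)
ray_features = attend(ray_queries, scene_tokens, scene_tokens)   # (num_rays, d)
# A small MLP head (omitted) maps ray_features to RGB; the whole pipeline is
# trained end-to-end with a novel-view reconstruction loss.
```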