Duddu, Sai Meher Karthik
Gemini Embedding: Generalizable Embeddings from Gemini
Lee, Jinhyuk, Chen, Feiyang, Dua, Sahil, Cer, Daniel, Shanbhogue, Madhuri, Naim, Iftekhar, Ábrego, Gustavo Hernández, Li, Zhe, Chen, Kaifeng, Vera, Henrique Schechter, Ren, Xiaoqi, Zhang, Shanfeng, Salz, Daniel, Boratko, Michael, Han, Jay, Chen, Blair, Huang, Shuo, Rao, Vikram, Suganthan, Paul, Han, Feng, Doumanoglou, Andreas, Gupta, Nithi, Moiseev, Fedor, Yip, Cathy, Jain, Aashi, Baumgartner, Simon, Shahi, Shahrokh, Gomez, Frank Palma, Mariserla, Sandeep, Choi, Min, Shah, Parashar, Goenka, Sonam, Chen, Ke, Xia, Ye, Chen, Koert, Duddu, Sai Meher Karthik, Chen, Yichang, Walker, Trevor, Zhou, Wenlei, Ghiya, Rakesh, Gleicher, Zach, Gill, Karan, Dong, Zhe, Seyedhosseini, Mojtaba, Sung, Yunhsuan, Hoffmann, Raphael, Duerig, Tom
Embedding models, which transform inputs into dense vector representations, are pivotal for capturing semantic information across various domains and modalities. Text embedding models represent words and sentences as vectors, strategically positioning semantically similar texts in close proximity within the embedding space (Gao et al., 2021; Le and Mikolov, 2014; Reimers and Gurevych, 2019). Recent research has focused on developing general-purpose embedding models capable of excelling in diverse downstream tasks, including information retrieval, clustering, and classification (Cer et al., 2018; Muennighoff et al., 2023). Leveraging their vast pre-training knowledge, large language models (LLMs) have emerged as a promising avenue for constructing such general-purpose embedding models, with the potential to significantly enhance performance across a broad spectrum of applications (Anil et al., 2023a,b; Brown et al., 2020).

The integration of LLMs has revolutionized the development of high-quality embedding models through two primary approaches. First, LLMs have been employed to refine training datasets by generating higher-quality examples. Techniques such as hard negative mining (Lee et al., 2024) and synthetic data generation (Dai et al., 2022; Wang et al., 2023) enable the distillation of LLM knowledge into smaller, more efficient embedding models, leading to substantial performance gains. Second, recognizing that embedding model parameters are frequently initialized from language models (Devlin et al., 2019; Karpukhin et al., 2020), researchers have explored leveraging LLM parameters directly for initialization (Ni et al., 2021).
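As a concrete illustration of "positioning semantically similar texts in close proximity", here is a minimal numpy sketch. The toy 4-dimensional vectors stand in for real encoder outputs and are hand-made for illustration, not produced by any model discussed above; similarity between embeddings is typically measured with cosine similarity:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two dense embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for encoder outputs (illustrative values).
cat = np.array([0.9, 0.1, 0.0, 0.2])
kitten = np.array([0.8, 0.2, 0.1, 0.3])
car = np.array([0.0, 0.9, 0.8, 0.1])

# Semantically related texts should land closer in the embedding space.
assert cosine_similarity(cat, kitten) > cosine_similarity(cat, car)
```

Downstream tasks such as retrieval, clustering, and classification all reduce to operations over these vectors and their pairwise similarities.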
Gecko: Versatile Text Embeddings Distilled from Large Language Models
Lee, Jinhyuk, Dai, Zhuyun, Ren, Xiaoqi, Chen, Blair, Cer, Daniel, Cole, Jeremy R., Hui, Kai, Boratko, Michael, Kapadia, Rajvi, Ding, Wen, Luan, Yi, Duddu, Sai Meher Karthik, Abrego, Gustavo Hernandez, Shi, Weiqiang, Gupta, Nithi, Kusupati, Aditya, Jain, Prateek, Jonnalagadda, Siddhartha Reddy, Chang, Ming-Wei, Naim, Iftekhar
Text embedding models represent natural language as dense vectors, positioning semantically similar texts near one another within the embedding space (Gao et al., 2021; Le and Mikolov, 2014; Reimers and Gurevych, 2019). These embeddings are commonly used for a wide range of downstream tasks including document retrieval, sentence similarity, classification, and clustering (Muennighoff et al., 2023). Instead of building separate embedding models for each downstream task, recent efforts seek to create a single embedding model supporting many tasks. Building such general-purpose text embedding models presents a challenge: these models require large amounts of training data to comprehensively cover desired domains and skills. Recent embedding efforts have therefore focused on using extensive collections of training examples (Li et al., 2023; Wang et al., 2022).
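The document-retrieval use case above can be sketched in a few lines: given precomputed document embeddings (randomly generated here, purely for illustration), retrieval ranks documents by cosine similarity to the query embedding:

```python
import numpy as np

# Hypothetical precomputed embeddings: 5 documents, 8 dimensions.
# Random values stand in for real encoder outputs.
rng = np.random.default_rng(0)
doc_embs = rng.normal(size=(5, 8))
# Simulate a query whose meaning is close to document 2.
query_emb = doc_embs[2] + 0.01 * rng.normal(size=8)

# L2-normalize so a dot product equals cosine similarity.
doc_embs /= np.linalg.norm(doc_embs, axis=1, keepdims=True)
query_emb /= np.linalg.norm(query_emb)

scores = doc_embs @ query_emb   # one similarity score per document
ranking = np.argsort(-scores)   # best match first
```

In practice the document matrix is indexed for approximate nearest-neighbor search rather than scored exhaustively, but the ranking principle is the same.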
Rethinking the Role of Token Retrieval in Multi-Vector Retrieval
Lee, Jinhyuk, Dai, Zhuyun, Duddu, Sai Meher Karthik, Lei, Tao, Naim, Iftekhar, Chang, Ming-Wei, Zhao, Vincent Y.
Multi-vector retrieval models such as ColBERT [Khattab and Zaharia, 2020] allow token-level interactions between queries and documents, and hence achieve state-of-the-art results on many information retrieval benchmarks. However, their non-linear scoring function cannot be scaled to millions of documents, necessitating a three-stage inference process: retrieving initial candidates via token retrieval, gathering all token vectors, and scoring the initial candidate documents. The non-linear scoring function is applied over all token vectors of each candidate document, making inference complicated and slow. In this paper, we aim to simplify multi-vector retrieval by rethinking the role of token retrieval. We present XTR, ConteXtualized Token Retriever, which introduces a simple yet novel objective function that encourages the model to retrieve the most important document tokens first. The improved token retrieval allows XTR to rank candidates using only the retrieved tokens rather than all tokens in the document, and enables a newly designed scoring stage that is two to three orders of magnitude cheaper than that of ColBERT. On the popular BEIR benchmark, XTR advances the state of the art by 2.8 nDCG@10 without any distillation. Detailed analysis confirms our decision to revisit token retrieval, as XTR demonstrates much better recall in the token retrieval stage than ColBERT.
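A minimal numpy sketch of the two scoring regimes described above, assuming token embeddings are already computed. `maxsim_score` is the standard ColBERT-style sum-of-MaxSim over all document tokens; `retrieved_only_score` is a simplified stand-in for XTR's idea of scoring a candidate with only the tokens the retrieval stage returned (the actual XTR also imputes scores for missing tokens, which is omitted here), so the function name and `retrieved_mask` argument are illustrative, not the papers' APIs:

```python
import numpy as np

def maxsim_score(query_toks: np.ndarray, doc_toks: np.ndarray) -> float:
    """ColBERT-style late interaction: for each query token, take its
    maximum similarity over all document tokens, then sum over queries."""
    sim = query_toks @ doc_toks.T        # (n_query, n_doc) token similarities
    return float(sim.max(axis=1).sum())

def retrieved_only_score(query_toks: np.ndarray, doc_toks: np.ndarray,
                         retrieved_mask: np.ndarray) -> float:
    """Simplified XTR-style scoring: use only the document tokens that the
    token-retrieval stage returned, so the full document's token vectors
    never need to be gathered at scoring time."""
    if not retrieved_mask.any():
        return 0.0
    sim = query_toks @ doc_toks[retrieved_mask].T
    return float(sim.max(axis=1).sum())
```

Because the max is taken over a subset of document tokens, the retrieved-only score lower-bounds the full MaxSim score and coincides with it when every token is retrieved; the cost saving comes from never touching the non-retrieved token vectors.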