Sung, Yunhsuan
Gemini Embedding: Generalizable Embeddings from Gemini
Lee, Jinhyuk, Chen, Feiyang, Dua, Sahil, Cer, Daniel, Shanbhogue, Madhuri, Naim, Iftekhar, Ábrego, Gustavo Hernández, Li, Zhe, Chen, Kaifeng, Vera, Henrique Schechter, Ren, Xiaoqi, Zhang, Shanfeng, Salz, Daniel, Boratko, Michael, Han, Jay, Chen, Blair, Huang, Shuo, Rao, Vikram, Suganthan, Paul, Han, Feng, Doumanoglou, Andreas, Gupta, Nithi, Moiseev, Fedor, Yip, Cathy, Jain, Aashi, Baumgartner, Simon, Shahi, Shahrokh, Gomez, Frank Palma, Mariserla, Sandeep, Choi, Min, Shah, Parashar, Goenka, Sonam, Chen, Ke, Xia, Ye, Chen, Koert, Duddu, Sai Meher Karthik, Chen, Yichang, Walker, Trevor, Zhou, Wenlei, Ghiya, Rakesh, Gleicher, Zach, Gill, Karan, Dong, Zhe, Seyedhosseini, Mojtaba, Sung, Yunhsuan, Hoffmann, Raphael, Duerig, Tom
Embedding models, which transform inputs into dense vector representations, are pivotal for capturing semantic information across various domains and modalities. Text embedding models represent words and sentences as vectors, strategically positioning semantically similar texts in close proximity within the embedding space (Gao et al., 2021; Le and Mikolov, 2014; Reimers and Gurevych, 2019). Recent research has focused on developing general-purpose embedding models capable of excelling in diverse downstream tasks, including information retrieval, clustering, and classification (Cer et al., 2018; Muennighoff et al., 2023). Leveraging their vast pre-training knowledge, large language models (LLMs) have emerged as a promising avenue for constructing such general-purpose embedding models, with the potential to significantly enhance performance across a broad spectrum of applications (Anil et al., 2023a,b; Brown et al., 2020). The integration of LLMs has advanced the development of high-quality embedding models through two primary approaches. First, LLMs have been employed to refine training datasets by generating higher-quality examples. Techniques such as hard negative mining (Lee et al., 2024) and synthetic data generation (Dai et al., 2022; Wang et al., 2023) enable the distillation of LLM knowledge into smaller, more efficient embedding models, leading to substantial performance gains. Second, recognizing that embedding model parameters are frequently initialized from language models (Devlin et al., 2019; Karpukhin et al., 2020), researchers have explored leveraging LLM parameters directly for initialization (Ni et al., 2021).
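The core property described above, that semantically similar texts sit close together in the embedding space, is typically measured with cosine similarity. The sketch below illustrates this with hand-made toy vectors; it does not use real Gemini Embedding outputs, and the vectors and similarity values are illustrative only.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d vectors standing in for model outputs (hypothetical, not real embeddings).
v_cat = [0.9, 0.1, 0.2]
v_kitten = [0.85, 0.15, 0.25]
v_car = [0.1, 0.9, 0.3]

print(cosine_similarity(v_cat, v_kitten))  # high: semantically similar texts
print(cosine_similarity(v_cat, v_car))     # much lower: dissimilar texts
```

A general-purpose embedding model aims to make this single geometric measure useful across retrieval, clustering, and classification alike.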
Characterizing Attribution and Fluency Tradeoffs for Retrieval-Augmented Large Language Models
Aksitov, Renat, Chang, Chung-Ching, Reitter, David, Shakeri, Siamak, Sung, Yunhsuan
Despite recent progress, it has been difficult to prevent semantic hallucinations in generative Large Language Models. One common solution is to augment LLMs with a retrieval system and ensure that the generated output is attributable to the retrieved information. Given this added constraint, it is plausible to expect that the overall quality of the output will be affected, for example, in terms of fluency. Can scaling language models help? Here we examine the relationship between fluency and attribution in LLMs prompted with retrieved evidence in knowledge-heavy dialog settings. Our experiments used a set of automatic metrics, aligned with human preferences, to evaluate a large set of generations produced under varying LLM parameters and supplied context. We show that larger models tend to do much better in both fluency and attribution, and that (naively) using top-k retrieval versus top-1 retrieval improves attribution but hurts fluency. We then propose a recipe that could allow smaller models to close the gap with larger models while preserving the benefits of top-k retrieval and avoiding its drawbacks.