Pseudo-Relevance Feedback Can Improve Zero-Shot LLM-Based Dense Retrieval
Hang Li, Xiao Wang, Bevan Koopman, Guido Zuccon
Pseudo-relevance feedback (PRF) refines queries by leveraging initially retrieved documents to improve retrieval effectiveness. In this paper, we investigate how large language models (LLMs) can facilitate PRF for zero-shot LLM-based dense retrieval, extending the recently proposed PromptReps method. Specifically, our approach uses LLMs to extract salient passage features, such as keywords and summaries, from top-ranked documents, which are then integrated into PromptReps to produce enhanced query representations.

Recent advances in language modelling have motivated the replacement of encoder-only backbones like BERT with larger decoder-only backbones (generative LLMs) to form dense representations [2, 13, 23], allowing models to leverage richer contextual information and enhancing dense retrieval generalization. Of particular interest for this paper is PromptReps [23], an LLM-based approach for dense retrieval. PromptReps is unique in that it does not require contrastive learning, producing effective representations for dense retrieval.
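To make the described pipeline concrete, here is a minimal Python sketch of the idea using Hugging Face transformers: a PromptReps-style embedding from a generative LLM, plus an LLM-driven PRF step that folds keywords and summaries of top-ranked documents back into the query representation. The model name, prompt wording, and the choice of the last-token hidden state as the dense vector are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of LLM-based PRF for dense retrieval (not the authors' code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # assumption: any instruction-tuned causal LM
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

def embed(text: str) -> torch.Tensor:
    """PromptReps-style dense vector: hidden state of the final prompt token."""
    prompt = f'Passage: "{text}". Use one word to represent the passage:'
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = lm(**ids, output_hidden_states=True)
    vec = out.hidden_states[-1][0, -1]  # last layer, last token
    return torch.nn.functional.normalize(vec, dim=0)

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Greedy generation used to extract features from feedback documents."""
    ids = tok(prompt, return_tensors="pt")
    out = lm.generate(**ids, max_new_tokens=max_new_tokens, do_sample=False)
    return tok.decode(out[0, ids["input_ids"].shape[1]:], skip_special_tokens=True)

def prf_query_vector(query: str, feedback_docs: list[str]) -> torch.Tensor:
    """Enhance the query with LLM-extracted features from top-ranked documents."""
    ctx = "\n\n".join(feedback_docs)
    keywords = generate(f"List the most salient keywords of these passages:\n{ctx}")
    summary = generate(f"Summarise these passages in one sentence:\n{ctx}")
    # Integrate extracted features with the original query, then re-embed the
    # enriched query for a second retrieval round.
    enriched = f"{query}\nKeywords: {keywords}\nSummary: {summary}"
    return embed(enriched)
```

Under these assumptions, a first retrieval pass with `embed(query)` supplies `feedback_docs`, and `prf_query_vector` produces the enhanced representation for re-ranking or re-retrieval.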
arXiv.org Artificial Intelligence
Mar-19-2025