RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs
Neural Information Processing Systems
Large language models (LLMs) typically utilize the top-k contexts from a retriever in retrieval-augmented generation (RAG). In this work, we propose RankRAG, a novel instruction fine-tuning framework that instruction-tunes a single LLM for the dual purposes of context ranking and answer generation in RAG. In particular, the instruction-tuned LLMs work surprisingly well when a small fraction of ranking data is added to the training blend, outperforming existing expert ranking models, including the same LLM fine-tuned exclusively on a large amount of ranking data. For generation, we compare our model with many strong baselines, including GPT-4-0613, GPT-4-turbo-2024-0409, and ChatQA-1.5, an open-sourced model with state-of-the-art performance on RAG benchmarks.
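The abstract describes a retrieve-rerank-generate pipeline in which one instruction-tuned LLM performs both the ranking and generation steps. Below is a minimal sketch of that inference flow, assuming the retriever and LLM are supplied as callables; the helper names (`retrieve`, `llm`), the prompt templates, and the True/False relevance heuristic are illustrative assumptions, not the paper's actual prompts or scoring method.

```python
# Minimal sketch of RankRAG-style inference: one instruction-tuned LLM
# both reranks retrieved passages and generates the final answer.
# All prompt templates and helper names here are assumptions for
# illustration, not the paper's exact formulation.
from typing import Callable, List, Tuple

RANK_PROMPT = (
    "Question: {question}\nPassage: {passage}\n"
    "Is this passage relevant to the question? Answer True or False."
)
GEN_PROMPT = (
    "Use the following passages to answer the question.\n"
    "{contexts}\n\nQuestion: {question}\nAnswer:"
)

def rankrag_answer(
    question: str,
    retrieve: Callable[[str, int], List[str]],  # retriever: (query, N) -> passages
    llm: Callable[[str], str],                  # the single instruction-tuned LLM
    n_retrieve: int = 20,
    k_keep: int = 5,
) -> str:
    # Step 1: retrieve a broad candidate pool with an off-the-shelf retriever.
    candidates = retrieve(question, n_retrieve)

    # Step 2: rerank the candidates with the *same* LLM. Relevance is scored
    # here by whether the model answers "True"; the paper derives a relevance
    # score from the model, which this binary check only crudely approximates.
    scored: List[Tuple[float, str]] = []
    for passage in candidates:
        verdict = llm(RANK_PROMPT.format(question=question, passage=passage))
        score = 1.0 if verdict.strip().lower().startswith("true") else 0.0
        scored.append((score, passage))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    top_k = [p for _, p in scored[:k_keep]]

    # Step 3: generate the final answer conditioned on the reranked top-k contexts.
    contexts = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(top_k))
    return llm(GEN_PROMPT.format(contexts=contexts, question=question))
```

In use, `retrieve` would wrap any dense or sparse retriever and `llm` a single fine-tuned checkpoint, so the reranking step adds no second model, only extra calls to the same one.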