Query-Focused Retrieval Heads Improve Long-Context Reasoning and Re-ranking
Wuwei Zhang, Fangcong Yin, Howard Yen, Danqi Chen, Xi Ye
arXiv.org Artificial Intelligence
Recent work has identified retrieval heads, a subset of attention heads responsible for retrieving salient information in long-context language models (LMs), as measured by their copy-paste behavior in Needle-in-a-Haystack tasks. In this paper, we introduce QRHead (Query-Focused Retrieval Head), an improved set of attention heads that enhance retrieval from long context. We identify QRHead by aggregating attention scores with respect to the input query, using a handful of examples from real-world tasks (e.g., long-context QA). We further introduce QRRetriever, an efficient and effective retriever that uses the accumulated attention mass of QRHead as retrieval scores. We apply QRRetriever to long-context reasoning by selecting the parts of the context with the highest retrieval scores. On the multi-hop reasoning tasks LongMemEval and CLIPPER, this yields over 10% performance gains over full context and outperforms strong dense retrievers. We also evaluate QRRetriever as a re-ranker on the BEIR benchmark and find that it achieves strong zero-shot performance, outperforming other LLM-based re-rankers such as RankGPT. Further analysis shows that both the query-context attention scoring and task selection are crucial for identifying QRHead with strong downstream utility. Overall, our work contributes a general-purpose retriever and offers interpretability insights into the long-context capabilities of LMs.
Sep-30-2025
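To make the two steps in the abstract concrete, here is a minimal sketch of (1) selecting query-focused retrieval heads from a few labeled examples and (2) ranking candidate documents by the attention mass those heads place on them. It assumes a HuggingFace-style causal LM with exposed per-head attention maps; the model name, the function names (`head_masses`, `select_qr_heads`, `qr_retrieve`), and the normalization used for head selection are illustrative assumptions, not the authors' released implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder; any causal LM works

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    torch_dtype=torch.bfloat16,
    attn_implementation="eager",  # eager attention is needed for output_attentions
)
model.eval()


@torch.no_grad()
def head_masses(docs: list[str], query: str) -> torch.Tensor:
    """One forward pass over concat(docs) + query; returns the attention mass
    each head sends from the query tokens to each doc's tokens, as a tensor
    of shape (num_docs, num_layers, num_heads)."""
    doc_ids = [tok(d + "\n", add_special_tokens=False).input_ids for d in docs]
    q_ids = tok(query, add_special_tokens=False).input_ids
    input_ids = torch.tensor([sum(doc_ids, []) + q_ids], device=model.device)
    attn = model(input_ids, output_attentions=True).attentions  # layers x (1, H, T, T)
    q_start = input_ids.shape[1] - len(q_ids)
    masses, pos = [], 0
    for ids in doc_ids:
        per_layer = torch.stack([
            # sum attention flowing from all query rows onto this doc's columns
            a[0, :, q_start:, pos:pos + len(ids)].float().sum(dim=(-1, -2))
            for a in attn
        ])  # (num_layers, num_heads)
        masses.append(per_layer)
        pos += len(ids)
    return torch.stack(masses)


def select_qr_heads(calibration: list[tuple[list[str], str, int]], k: int = 16):
    """calibration: (docs, query, gold_doc_index) triples from a real task
    (e.g., long-context QA). Scores each head by the fraction of its
    query-to-context mass that lands on the gold doc (one plausible
    criterion, an assumption here), averaged over examples; keeps the top k."""
    total = 0
    for docs, query, gold_idx in calibration:
        m = head_masses(docs, query)  # (D, L, H)
        total = total + m[gold_idx] / m.sum(0).clamp_min(1e-9)
    num_heads = total.shape[-1]
    flat = total.flatten().topk(k).indices
    return [(i.item() // num_heads, i.item() % num_heads) for i in flat]


def qr_retrieve(docs: list[str], query: str, heads, top_k: int = 5) -> list[int]:
    """Rank docs by the accumulated attention mass of the selected heads."""
    m = head_masses(docs, query)
    scores = torch.stack([m[:, l, h] for l, h in heads]).sum(0)  # (num_docs,)
    return scores.argsort(descending=True)[:top_k].tolist()
```

A usage pattern consistent with the abstract would be to run `select_qr_heads` once on a handful of task examples, then call `qr_retrieve` to keep only the top-scoring context chunks before answering, or to re-rank a candidate pool in a zero-shot fashion. No extra training is involved; retrieval scores are read directly from the forward pass.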