Ask Optimal Questions: Aligning Large Language Models with Retriever's Preference in Conversation
Chanwoong Yoon, Gangwoo Kim, Byeongguk Jeon, Sungdong Kim, Yohan Jo, Jaewoo Kang
–arXiv.org Artificial Intelligence
Conversational search, unlike single-turn retrieval tasks, requires understanding the current question within a dialogue context. The common rewrite-then-retrieve approach aims to decontextualize questions so they are self-sufficient for off-the-shelf retrievers, but most existing methods produce sub-optimal query rewrites due to their limited ability to incorporate signals from the retrieval results. To overcome this limitation, we present a novel framework, RetPO (Retriever's Preference Optimization), designed to optimize a language model (LM) for reformulating search queries in line with the preferences of the target retrieval systems. The process begins by prompting a large LM to produce various potential rewrites and then collects retrieval performance for these rewrites as the retrievers' preferences. Through this process, we construct a large-scale dataset called the RF collection, containing Retrievers' Feedback on over 410K query rewrites across 12K conversations. We then fine-tune a smaller LM on this dataset to align it with the retrievers' feedback. Our resulting model demonstrates superiority on two benchmarks, surpassing the previous state-of-the-art performance of rewrite-then-retrieve approaches.
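The feedback-collection step described in the abstract (scoring candidate rewrites with a retriever and turning the scores into preferences) could be sketched roughly as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the function names and the toy scoring table are invented, and a real pipeline would use an actual retrieval metric (e.g. rank of the gold passage) in place of the stand-in scores.

```python
# Hypothetical sketch of collecting retrievers' feedback: score each
# candidate query rewrite with a retriever, then form (preferred, rejected)
# pairs suitable for preference optimization. All names are illustrative.

from itertools import combinations

def collect_preferences(rewrites, score_fn):
    """Score each rewrite and return (preferred, rejected) pairs,
    where the preferred rewrite achieved the higher retrieval score."""
    scored = [(r, score_fn(r)) for r in rewrites]
    pairs = []
    for (r1, s1), (r2, s2) in combinations(scored, 2):
        if s1 > s2:
            pairs.append((r1, r2))  # r1 preferred over r2
        elif s2 > s1:
            pairs.append((r2, r1))  # r2 preferred over r1
        # ties yield no preference pair
    return pairs

# Toy retrieval scores standing in for a real metric (e.g. MRR of the
# gold passage under each rewritten query).
toy_scores = {
    "who won?": 0.2,
    "who won the title?": 0.5,
    "who won the 2020 championship title?": 0.9,
}

pairs = collect_preferences(list(toy_scores), toy_scores.get)
for preferred, rejected in pairs:
    print(f"{preferred!r} > {rejected!r}")
```

The resulting pairs would then feed a preference-optimization objective (such as DPO) to align the smaller rewriter LM with the retriever's feedback.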
Jun-17-2025