PrismRAG: Boosting RAG Factuality with Distractor Resilience and Strategized Reasoning
Kachuee, Mohammad, Gollapudi, Teja, Kim, Minseok, Huang, Yin, Sun, Kai, Yang, Xiao, Wang, Jiaqi, Shah, Nirav, Liu, Yue, Colak, Aaron, Kumar, Anuj, Yih, Wen-tau, Dong, Xin Luna
arXiv.org Artificial Intelligence
Retrieval-augmented generation (RAG) often falls short when the retrieved context includes confusing, semi-relevant passages, or when answering a question requires deep contextual understanding and reasoning. We propose an efficient fine-tuning framework, called PrismRAG, that (i) trains the model with distractor-aware QA pairs mixing gold evidence with subtle distractor passages, and (ii) instills reasoning-centric habits that make the LLM plan, rationalize, and synthesize without relying on extensive human-engineered instructions. Evaluated across 12 open-book RAG QA benchmarks spanning diverse application domains and scenarios, PrismRAG improves average factuality by 5.4%, outperforming state-of-the-art solutions.
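As a rough illustration of the distractor-aware training idea the abstract describes, the sketch below builds a single fine-tuning example by mixing the gold evidence passage with semi-relevant distractors. This is a minimal, hypothetical sketch: the function, field names, and prompt format are assumptions for illustration, not the paper's actual data pipeline.

```python
import random

def build_distractor_aware_sample(question, answer, gold_passage,
                                  distractor_passages, num_distractors=3,
                                  seed=0):
    """Mix the gold evidence with semi-relevant distractor passages,
    shuffle them, and format one fine-tuning example (hypothetical format)."""
    rng = random.Random(seed)
    k = min(num_distractors, len(distractor_passages))
    passages = [gold_passage] + rng.sample(distractor_passages, k=k)
    rng.shuffle(passages)  # the gold passage lands at a random position
    context = "\n\n".join(f"[Passage {i + 1}] {p}"
                          for i, p in enumerate(passages))
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return {"prompt": prompt, "target": answer}

sample = build_distractor_aware_sample(
    "Who wrote 'Hamlet'?",
    "William Shakespeare",
    "Hamlet is a tragedy written by William Shakespeare around 1600.",
    ["Macbeth was written by Shakespeare in 1606.",
     "Christopher Marlowe wrote Doctor Faustus.",
     "Hamlet is a small city in North Carolina."],
)
```

A corpus of such examples, paired with reasoning-centric targets, is the kind of training data the framework relies on rather than hand-engineered instructions.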
Jul-28-2025