LLMQuoter: Enhancing RAG Capabilities Through Efficient Quote Extraction From Large Contexts
Yuri Façanha Bezerra, Li Weigang
arXiv.org Artificial Intelligence
We introduce LLMQuoter, a lightweight, distillation-based model designed to enhance Retrieval Augmented Generation (RAG) by extracting the most relevant textual evidence for downstream reasoning tasks. Built on the LLaMA-3B architecture and fine-tuned with Low-Rank Adaptation (LoRA) on a 15,000-sample subset of HotpotQA, LLMQuoter adopts a "quote-first-then-answer" strategy, efficiently identifying key quotes before passing curated snippets to reasoning models. This workflow reduces cognitive overhead and outperforms full-context approaches like Retrieval-Augmented Fine-Tuning (RAFT), achieving over 20-point accuracy gains across both small and large language models. By leveraging knowledge distillation from a high-performing teacher model, LLMQuoter achieves competitive results in a resource-efficient fine-tuning setup. It democratizes advanced RAG capabilities, delivering significant performance improvements without requiring extensive model retraining. Our results highlight the potential of distilled quote-based reasoning to streamline complex workflows, offering a scalable and practical solution for researchers and practitioners alike.
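The "quote-first-then-answer" workflow can be sketched in miniature. The real LLMQuoter is a LoRA-fine-tuned LLaMA-3B quoter distilled from a stronger teacher; in this toy sketch a simple word-overlap scorer stands in for the quoter, and `reasoning_model` is a hypothetical callable standing in for any downstream reader model.

```python
def extract_quotes(context: str, question: str, k: int = 2) -> list[str]:
    """Stand-in quoter: rank context sentences by word overlap with the
    question. The actual LLMQuoter replaces this with a fine-tuned LLM."""
    q_words = set(question.lower().split())
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    scored = sorted(
        sentences,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer_with_quotes(context: str, question: str, reasoning_model) -> str:
    """Quote-first-then-answer: pass only curated snippets, not the full
    context, to the reasoning model."""
    quotes = extract_quotes(context, question)
    prompt = (
        "Evidence:\n"
        + "\n".join(f"- {q}" for q in quotes)
        + f"\nQuestion: {question}"
    )
    return reasoning_model(prompt)


# Example usage with a trivial stand-in reasoning model.
context = (
    "Paris is the capital of France. The Eiffel Tower is in Paris. "
    "Bananas are yellow."
)
question = "What is the capital of France?"
top_quote = extract_quotes(context, question, k=1)[0]
```

The point of the split is that the reasoning model sees only the top-k quotes instead of the full retrieved context, which is what the paper argues reduces cognitive overhead for both small and large readers.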
Jan-9-2025