MHA-RAG: Improving Efficiency, Accuracy, and Consistency by Encoding Exemplars as Soft Prompts
Abhinav Jain, Xinyu Yao, Thomas Reps, Christopher Jermaine
arXiv.org Artificial Intelligence
Adapting Foundation Models to new domains with limited training data is challenging and computationally expensive. While prior work has demonstrated the effectiveness of using domain-specific exemplars as in-context demonstrations, we investigate whether representing exemplars purely as text is the most efficient, effective, and stable approach. We explore an alternative: representing exemplars as soft prompts with an exemplar-order-invariant model architecture. To this end, we introduce Multi-Head Attention Retrieval-Augmented Generation (MHA-RAG), a framework in which the number of attention heads serves as a simple hyperparameter for controlling soft-prompt generation across different tasks. Across multiple question-answering benchmarks and model scales, MHA-RAG achieves a 20-point performance gain over standard RAG while cutting inference costs by a factor of 10x in GFLOPs, delivering both higher accuracy and greater efficiency, invariant to exemplar order.
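To make the idea concrete, here is a minimal sketch (not the authors' released code) of how retrieved exemplars could be pooled into soft prompts with multi-head attention: exemplar embeddings from a retriever are attended to by a set of learned query vectors, and because no positional encoding is applied to the exemplar set, the resulting soft prompts are invariant to exemplar order. The class name, dimensions, and use of PyTorch's `nn.MultiheadAttention` are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn


class SoftPromptGenerator(nn.Module):
    """Pools retrieved exemplar embeddings into a fixed number of soft-prompt vectors."""

    def __init__(self, d_model: int = 768, num_heads: int = 8, num_prompt_tokens: int = 16):
        super().__init__()
        # Learned query vectors; each produces one soft-prompt token.
        self.queries = nn.Parameter(torch.randn(num_prompt_tokens, d_model) * 0.02)
        # num_heads is the simple hyperparameter highlighted in the abstract.
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, exemplar_embeddings: torch.Tensor) -> torch.Tensor:
        # exemplar_embeddings: (batch, num_exemplars, d_model) from a retriever/encoder.
        # Returns (batch, num_prompt_tokens, d_model), to be prepended to the LM's input embeddings.
        batch = exemplar_embeddings.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        # No positional encoding on keys/values, so permuting the exemplars
        # leaves the attention output unchanged (order invariance).
        soft_prompts, _ = self.attn(q, exemplar_embeddings, exemplar_embeddings)
        return soft_prompts


if __name__ == "__main__":
    gen = SoftPromptGenerator()
    exemplars = torch.randn(2, 5, 768)             # 5 retrieved exemplars per query
    shuffled = exemplars[:, torch.randperm(5), :]  # same exemplars, different order
    assert torch.allclose(gen(exemplars), gen(shuffled), atol=1e-5)
```

The order-invariance check at the bottom illustrates the property the abstract claims: shuffling the retrieved exemplars does not change the generated soft prompts.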
Oct-8-2025