Beyond Elicitation: Provision-based Prompt Optimization for Knowledge-Intensive Tasks
Yunzhe Xu, Zhuosheng Zhang, Zhe Liu
arXiv.org Artificial Intelligence
Abstract: While prompt optimization has emerged as a critical technique for enhancing language model performance, existing approaches primarily focus on elicitation-based strategies that search for optimal prompts to activate models' capabilities. These methods exhibit fundamental limitations when addressing knowledge-intensive tasks, as they operate within fixed parametric boundaries rather than providing the factual knowledge, terminology precision, and reasoning patterns required in specialized domains. To address these limitations, we propose Knowledge-Provision-based Prompt Optimization (KPPO), a framework that reformulates prompt optimization as systematic knowledge integration rather than potential elicitation. KPPO introduces three key innovations: 1) a knowledge gap filling mechanism for knowledge gap identification and targeted remediation; 2) a batch-wise candidate evaluation approach that considers both performance improvement and distributional stability; 3) an adaptive knowledge pruning strategy that balances performance and token efficiency, reducing token usage by up to 29%. Extensive evaluation on 15 knowledge-intensive benchmarks from various domains demonstrates KPPO's superiority over elicitation-based methods, with an average performance improvement of ~6% over the strongest baseline while achieving comparable or lower token consumption.

Large Language Models (LLMs) have achieved unprecedented performance across diverse natural language processing tasks through sophisticated prompt engineering techniques [1]. The field has evolved from manual prompt design approaches [2], [3] to automated optimization frameworks [4]-[7], where optimizer LLMs iteratively refine prompts to maximize task performance.
These automated approaches, collectively termed elicitation-based optimization, operate under the fundamental assumption that optimal prompts can unlock latent capabilities within pre-trained model parameters through strategic reformulation of instructions, exemplars, or reasoning templates.
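The elicitation-based loop described above can be sketched as follows. This is a minimal, hypothetical illustration, not the method of any cited framework: `toy_propose` and `toy_score` are stand-ins for the optimizer-LLM proposal step and the task-performance evaluation, and the length-based scoring metric is purely a placeholder.

```python
# Hypothetical sketch of an elicitation-based prompt-optimization loop:
# an "optimizer" proposes prompt variants, each variant is scored on the
# task, and the best-scoring prompt seeds the next round.
from typing import Callable, List


def optimize_prompt(seed: str,
                    propose: Callable[[str], List[str]],
                    score: Callable[[str], float],
                    rounds: int = 3) -> str:
    """Greedy hill-climbing over prompt candidates."""
    best, best_score = seed, score(seed)
    for _ in range(rounds):
        for cand in propose(best):
            s = score(cand)
            if s > best_score:
                best, best_score = cand, s
    return best


# Toy stand-ins: the proposer appends common reformulations; the scorer
# rewards longer, more explicit instructions (a placeholder metric, not
# a real task-performance measure).
def toy_propose(prompt: str) -> List[str]:
    return [prompt + " Think step by step.",
            prompt + " Answer concisely."]


def toy_score(prompt: str) -> float:
    return float(len(prompt.split()))


print(optimize_prompt("Solve the problem.", toy_propose, toy_score))
```

Note that the loop only rearranges instructions around a fixed base prompt; nothing in it injects new domain knowledge, which is the limitation the abstract attributes to elicitation-based methods.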
Nov-14-2025