Inference-Aware Prompt Optimization for Aligning Black-Box Large Language Models
Saaduddin Mahmud, Mason Nakamura, Kyle H. Wray, Shlomo Zilberstein
arXiv.org Artificial Intelligence
Prompt optimization methods have demonstrated significant effectiveness in aligning black-box large language models (LLMs). In parallel, inference scaling strategies such as Best-of-N Sampling and Majority Voting have also been shown to enhance alignment and performance by trading off computation. However, existing prompt optimization approaches are inference-strategy agnostic; that is, they optimize prompts without regard to the inference strategy employed during deployment. This constitutes a significant methodological gap, as our empirical and theoretical analysis reveals a strong interdependence between these two paradigms. Moreover, we find that user preferences regarding trade-offs among multiple objectives and inference budgets substantially influence the choice of prompt and inference configuration. To address this gap, we introduce a novel unified framework named IAPO (Inference-Aware Prompt Optimization) that jointly optimizes the prompt and inference scale while accounting for the inference budget and different task objectives. We then develop a fixed-budget training algorithm for IAPO, which we call PSST (Prompt Scaling via Sequential Trimming), and analyze its finite-budget guarantees on error probability. Finally, we evaluate the effectiveness of PSST on six different tasks, including multi-objective text generation and reasoning, and demonstrate the critical role of incorporating inference-awareness when aligning black-box LLMs through prompt optimization.
Aug-15-2025
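The paper's PSST algorithm is not reproduced here, but the idea of jointly selecting a prompt and an inference scale under a fixed evaluation budget can be illustrated with a successive-halving-style trimming loop. The sketch below is a rough approximation under stated assumptions: the configuration set, the `evaluate` scoring callable (standing in for a noisy LLM rollout scored against the task objective), and all names are hypothetical, not the authors' implementation.

```python
import math
import random

def sequential_trimming(configs, evaluate, total_budget, seed=0):
    """Fixed-budget selection over joint (prompt, inference-scale) configs.

    configs: list of hashable configuration objects, e.g. (prompt, N) pairs.
    evaluate: callable(config) -> float, one noisy utility estimate
              (e.g. a Best-of-N rollout scored by a reward model).
    total_budget: total number of evaluate() calls allowed.

    Repeatedly allocates an equal slice of the budget across the surviving
    configurations, then trims the bottom half, until one survivor remains.
    """
    rng = random.Random(seed)
    survivors = list(configs)
    rounds = max(1, math.ceil(math.log2(len(survivors))))
    per_round = total_budget // rounds
    for _ in range(rounds):
        if len(survivors) == 1:
            break
        pulls = max(1, per_round // len(survivors))  # budget per survivor
        scored = []
        for cfg in survivors:
            est = sum(evaluate(cfg) for _ in range(pulls)) / pulls
            scored.append((est, rng.random(), cfg))  # random tie-break
        scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
        survivors = [cfg for _, _, cfg in scored[: max(1, len(scored) // 2)]]
    return survivors[0]

# Hypothetical demo: two prompts crossed with three sample counts, with a
# synthetic noisy utility standing in for scored LLM outputs.
configs = [(p, n) for p in ("P1", "P2") for n in (1, 4, 16)]
true_utility = {c: i * 0.1 for i, c in enumerate(configs)}
noise = random.Random(1)

def noisy_eval(cfg):
    return true_utility[cfg] + noise.gauss(0.0, 0.01)

best = sequential_trimming(configs, noisy_eval, total_budget=600)
```

With a small noise level and a clear utility gap, the highest-utility configuration survives the trimming rounds; in practice the number of pulls needed per round depends on how noisy the rollout scores are, which is what finite-budget error-probability analyses of this kind of procedure quantify.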