Aligning LLMs on a Budget: Inference-Time Alignment with Heuristic Reward Models
Nakamura, Mason, Mahmud, Saaduddin, Wray, Kyle H., Zamani, Hamed, Zilberstein, Shlomo
arXiv.org Artificial Intelligence
Aligning LLMs with user preferences is crucial for real-world use but often requires costly fine-tuning or expensive inference, forcing trade-offs between alignment quality and computational cost. Existing inference-time methods typically ignore this balance, focusing solely on the optimized policy's performance. We propose HIA (Heuristic-Guided Inference-time Alignment), a tuning-free, black-box-compatible approach that uses a lightweight prompt optimizer, heuristic reward models, and two-stage filtering to reduce inference calls while preserving alignment quality. On real-world prompt datasets, HelpSteer and ComPRed, HIA outperforms best-of-N sampling, beam search, and greedy search baselines in multi-objective, goal-conditioned tasks under the same inference budget. We also find that HIA is effective under low-inference budgets with as little as one or two response queries, offering a practical solution for scalable, personalized LLM deployment.
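The abstract's core idea, using a cheap heuristic reward to decide which candidates deserve an expensive LLM call, can be illustrated with a minimal sketch. This is not the authors' implementation: `heuristic_reward`, `response_reward`, `llm_generate`, and `hia_style_selection` are all hypothetical stand-ins for the paper's heuristic reward models, second-stage scoring, and black-box LLM, assumed here only to show the two-stage filtering pattern.

```python
def heuristic_reward(prompt: str) -> float:
    # Hypothetical cheap proxy reward: favors longer, more specific prompts.
    # Stands in for the paper's learned heuristic reward models.
    return len(prompt.split()) + prompt.count("?")

def llm_generate(prompt: str) -> str:
    # Stand-in for an expensive black-box LLM call.
    return f"response to: {prompt}"

def response_reward(response: str) -> float:
    # Hypothetical second-stage scorer applied to full responses.
    return float(len(response))

def hia_style_selection(candidate_prompts, keep_k=2):
    """Two-stage filtering sketch: rank candidate prompts with a cheap
    heuristic, then spend LLM calls only on the top-k survivors."""
    # Stage 1: heuristic pre-filter (no LLM calls consumed).
    survivors = sorted(candidate_prompts, key=heuristic_reward, reverse=True)[:keep_k]
    # Stage 2: query the LLM only for survivors and keep the best response.
    responses = [(llm_generate(p), p) for p in survivors]
    best_response, best_prompt = max(responses, key=lambda rp: response_reward(rp[0]))
    return best_prompt, best_response
```

Under this sketch, the inference budget scales with `keep_k` rather than with the number of candidate prompts, which mirrors the abstract's claim that alignment quality can be preserved with as few as one or two response queries.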
Aug-8-2025