Efficient Contextual LLM Cascades through Budget-Constrained Policy Learning
Neural Information Processing Systems
Recent successes in natural language processing have led to a proliferation of large language models (LLMs) from multiple providers. Each LLM offering differs in inference accuracy, monetary cost, and latency, and its accuracy further depends on the exact wording of the question (i.e., the specific prompt). At the same time, users often face a monetary budget and a latency limit for answering all of their questions, and they do not know which LLM to choose for each question in order to meet their accuracy and long-term budget requirements.
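The selection problem the abstract describes can be sketched as a simple per-query budget allocator: each question is routed to the most accurate model whose cost fits the remaining per-query allowance. This is only an illustrative baseline, not the paper's policy-learning method; the model names, costs, latencies, and accuracy estimates below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    cost: float          # assumed dollars per query
    latency: float       # assumed seconds per query
    est_accuracy: float  # assumed expected accuracy for this prompt

def choose_model(options, remaining_budget, queries_left):
    """Pick the most accurate model whose per-query cost fits the allowance.

    The allowance spreads the remaining budget evenly over the queries left;
    if nothing is affordable, fall back to the cheapest model.
    """
    allowance = remaining_budget / max(queries_left, 1)
    affordable = [m for m in options if m.cost <= allowance]
    pool = affordable or [min(options, key=lambda m: m.cost)]
    return max(pool, key=lambda m: m.est_accuracy)

# Hypothetical model menu for illustration only.
models = [
    ModelOption("small",  cost=0.001, latency=0.2, est_accuracy=0.70),
    ModelOption("medium", cost=0.010, latency=0.5, est_accuracy=0.82),
    ModelOption("large",  cost=0.060, latency=1.5, est_accuracy=0.90),
]

choice = choose_model(models, remaining_budget=0.5, queries_left=20)
print(choice.name)  # allowance is 0.025, so "medium" is the best affordable model
```

A learned policy would replace the static `est_accuracy` field with a per-prompt prediction, which is where the contextual, budget-constrained formulation in the paper comes in.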
Mar-26-2025, 15:16:46 GMT