Efficient Contextual LLM Cascades through Budget-Constrained Policy Learning

Neural Information Processing Systems

Recent successes in natural language processing have led to a proliferation of large language models (LLMs) from multiple providers. Each LLM offering differs in inference accuracy, monetary cost, and latency, and its accuracy further depends on the exact wording of the question.
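To make the cascade idea concrete, here is a minimal sketch (not the paper's method): a policy that tries cheaper models first and escalates only while the remaining budget allows, stopping at the first sufficiently confident answer. All names, costs, and confidence numbers below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost: float        # assumed dollars per query (hypothetical)
    confidence: float  # stand-in for a learned answer-quality estimate

def cascade(models, budget, threshold=0.8):
    """Query models cheapest-first; stop when confident or out of budget."""
    spent = 0.0
    answer = None
    for m in sorted(models, key=lambda m: m.cost):
        if spent + m.cost > budget:
            break  # escalating further would exceed the budget
        spent += m.cost
        answer = m.name  # placeholder for the model's actual response
        if m.confidence >= threshold:
            break  # confident enough; no need to escalate
    return answer, spent

models = [
    Model("small", cost=0.001, confidence=0.6),
    Model("medium", cost=0.01, confidence=0.85),
    Model("large", cost=0.1, confidence=0.95),
]
print(cascade(models, budget=0.05))
```

A learned policy, as in the paper's setting, would replace the fixed cost ordering and confidence threshold with decisions conditioned on the question itself, since per-model accuracy varies with wording.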