LLMProxy: Reducing Cost to Access Large Language Models
Noah Martin, Abdullah Bin Faisal, Hiba Eltigani, Rukhshan Haroon, Swaminathan Lamelas, Fahad Dogar
arXiv.org Artificial Intelligence
In this paper, we make a case for a proxy for large language models that has explicit support for cost-saving optimizations. We design LLMProxy, which supports three key optimizations: model selection, context management, and caching. These optimizations present tradeoffs in terms of cost, inference time, and response quality, which applications can navigate through our high-level, bidirectional interface. As a case study, we implement a WhatsApp-based Q&A service that uses LLMProxy to provide a rich set of features to its users. This service is deployed at a small scale (100+ users) on the cloud; it has been operational for 15+ weeks, and users have asked 1400+ questions so far. We report on our experiences running this service and microbenchmark the specific benefits of the cost optimizations presented in this paper.
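The abstract's proxy idea can be illustrated with a minimal sketch: the application expresses a high-level preference (e.g., cost vs. quality), and the proxy internally handles model selection and caching. All names, the model catalog, and the interface below are hypothetical for illustration; they are not the paper's actual API.

```python
# Hypothetical sketch of a cost-aware LLM proxy (illustrative only).
from dataclasses import dataclass, field

@dataclass
class LLMProxy:
    # Illustrative model catalog: name -> (cost per query in $, quality score)
    models: dict = field(default_factory=lambda: {
        "small": (0.001, 0.6),
        "large": (0.030, 0.9),
    })
    cache: dict = field(default_factory=dict)

    def select_model(self, prefer: str) -> str:
        # Model selection: cheapest model when cost matters most,
        # highest-quality model otherwise.
        if prefer == "cost":
            return min(self.models, key=lambda m: self.models[m][0])
        return max(self.models, key=lambda m: self.models[m][1])

    def query(self, prompt: str, prefer: str = "cost") -> str:
        # Caching: reuse a prior answer for an identical prompt,
        # avoiding a second (paid) model invocation entirely.
        if prompt in self.cache:
            return self.cache[prompt]
        model = self.select_model(prefer)
        # Stand-in for a real LLM call behind the proxy.
        answer = f"[{model}] answer to: {prompt}"
        self.cache[prompt] = answer
        return answer

proxy = LLMProxy()
print(proxy.query("What is a proxy?"))   # first call hits the cheap model
print(proxy.query("What is a proxy?"))   # second call is served from cache
```

A real deployment would also manage conversational context (trimming or summarizing history per request), which this sketch omits for brevity.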
Oct-4-2024