Log Probability Tracking of LLM APIs
Timothée Chauvin, Erwan Le Merrer, François Taïani, Gilles Tredan
arXiv.org Artificial Intelligence
When using an LLM through an API provider, users expect the served model to remain consistent over time, a property crucial for the reliability of downstream applications and the reproducibility of research. Existing audit methods are too costly to apply at regular time intervals to the wide range of available LLM APIs. This means that model updates are left largely unmonitored in practice. In this work, we show that while LLM log probabilities (logprobs) are usually non-deterministic, they can still be used as the basis for cost-effective continuous monitoring of LLM APIs. We apply a simple statistical test based on the average value of each token logprob, requesting only a single token of output. This is enough to detect changes as small as one step of fine-tuning, making this approach more sensitive than existing methods while being 1,000x cheaper. We introduce the TinyChange benchmark as a way to measure the sensitivity of audit methods in the context of small, realistic model changes.

LLM API providers typically offer version-pinned endpoints, signaling to users that a given endpoint will serve a consistent model. Users of APIs tend to rely on this consistency: developers want to avoid unexpected regressions in their applications; researchers seek reproducibility in their experiments; regulators perform initial compliance assessments and assume that the API will keep serving the same model afterward (Yan & Zhang, 2022).
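The monitoring idea described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes repeated single-token logprob samples have already been collected for a fixed prompt, and compares their means with Welch's t-statistic. The function name, the threshold of 3, and the simulated data are all illustrative choices.

```python
import statistics

def detect_model_change(ref_logprobs, new_logprobs, t_threshold=3.0):
    """Flag a likely model change from two samples of the same token's
    logprob (e.g. the first output token for a fixed prompt).

    Uses Welch's t-statistic on the sample means; the threshold of 3
    is an illustrative choice, not a value taken from the paper.
    """
    m1, m2 = statistics.mean(ref_logprobs), statistics.mean(new_logprobs)
    v1, v2 = statistics.variance(ref_logprobs), statistics.variance(new_logprobs)
    n1, n2 = len(ref_logprobs), len(new_logprobs)
    t = (m1 - m2) / (v1 / n1 + v2 / n2) ** 0.5
    return abs(t) > t_threshold

# Deterministic stand-in for non-deterministic API logprobs: small jitter
# around a stable mean, then the same jitter around a slightly shifted mean.
jitter = [0.001 * ((i % 5) - 2) for i in range(50)]
ref = [-1.20 + j for j in jitter]             # reference sample
same = [-1.20 + j for j in reversed(jitter)]  # same model, fresh sample
shifted = [-1.25 + j for j in jitter]         # slightly fine-tuned model

print(detect_model_change(ref, same))     # → False
print(detect_model_change(ref, shifted))  # → True
```

Because only one output token is requested per probe, a monitor built this way can poll an endpoint at regular intervals at negligible cost, which is the property the paper's 1,000x cheapness claim rests on.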
Dec-4-2025
- Country:
  - Asia
    - Middle East > Iraq
      - Basra Governorate > Basra (0.04)
    - Singapore (0.04)
  - Europe > France
    - Brittany > Ille-et-Vilaine
      - Rennes (0.04)
    - Occitanie > Haute-Garonne
      - Toulouse (0.04)
  - North America > United States (0.04)
- Genre:
- Research Report (1.00)
- Technology: