Variational Uncertainty Decomposition for In-Context Learning
I. Shavindra Jayasekera, Jacob Si, Wenlong Chen, Filippo Valdettaro, A. Aldo Faisal, Yingzhen Li
As large language models (LLMs) are increasingly used for in-context prediction tasks, understanding the sources of uncertainty in in-context learning becomes essential for ensuring reliability. The recent hypothesis that in-context learning performs predictive Bayesian inference opens an avenue for Bayesian uncertainty estimation, in particular for decomposing uncertainty into epistemic uncertainty due to a lack of in-context data and aleatoric uncertainty inherent in the in-context prediction task. However, this decomposition remains under-explored because the latent parameter posterior of the underlying Bayesian model is intractable. In this work, we introduce a variational uncertainty decomposition framework for in-context learning that avoids explicitly sampling from the latent parameter posterior: we optimise auxiliary queries as probes to obtain an upper bound on the aleatoric uncertainty of an LLM's in-context learning procedure, which in turn induces a lower bound on the epistemic uncertainty. Through experiments on synthetic and real-world tasks, we show quantitatively and qualitatively that the decomposed uncertainties obtained with our method exhibit the desirable properties of epistemic and aleatoric uncertainty.
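Under the Bayesian view referenced in the abstract, total predictive uncertainty splits into epistemic and aleatoric parts via the standard mutual-information identity (a well-known decomposition in Bayesian uncertainty quantification, not a formula quoted from the paper):

```latex
\mathcal{H}\!\left[p(y \mid x, \mathcal{D})\right]
  = \underbrace{\mathbb{I}\!\left[y; \theta \mid x, \mathcal{D}\right]}_{\text{epistemic}}
  + \underbrace{\mathbb{E}_{p(\theta \mid \mathcal{D})}\,
      \mathcal{H}\!\left[p(y \mid x, \theta)\right]}_{\text{aleatoric}}
```

The sketch below illustrates how a probe-based estimate of this decomposition could look with a black-box predictive interface. The `llm_predict` function, the fixed (rather than optimised) probe set, and the Monte Carlo averaging are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of entropy-based uncertainty decomposition for in-context
# learning. Assumes a black-box `llm_predict(context, query)` that returns a
# probability vector over a finite label set; everything named here is an
# assumption for illustration, not the paper's published procedure.
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of a probability vector."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def decompose_uncertainty(llm_predict, context, query, aux_queries,
                          n_samples=16, rng=None):
    """Estimate total / aleatoric / epistemic uncertainty for one query.

    Total uncertainty is the entropy of the in-context predictive
    distribution p(y | x, D). Conditioning on auxiliary query-answer pairs
    sampled from the model's own predictive and averaging the resulting
    conditional entropies gives an upper bound on the aleatoric term; the
    gap to the total is then a lower bound on the epistemic term.
    """
    rng = rng or np.random.default_rng()
    total = entropy(llm_predict(context, query))

    cond_entropies = []
    for _ in range(n_samples):
        probed_context = list(context)
        for aux_q in aux_queries:
            # Sample an answer to the probe from the current predictive
            # distribution and append it as extra in-context evidence.
            p_aux = llm_predict(probed_context, aux_q)
            aux_a = int(rng.choice(len(p_aux), p=p_aux))
            probed_context.append((aux_q, aux_a))
        cond_entropies.append(entropy(llm_predict(probed_context, query)))

    aleatoric_ub = float(np.mean(cond_entropies))
    epistemic_lb = max(total - aleatoric_ub, 0.0)
    return total, aleatoric_ub, epistemic_lb
```

The direction of the bounds follows from the identity above: answers to probes sampled from the model's own predictive can only be less informative than the latent parameter itself, so the averaged conditional entropy upper-bounds the aleatoric term and the residual lower-bounds the epistemic term, consistent with the bounds described in the abstract.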
Sep-3-2025