To Believe or Not to Believe Your LLM: Iterative Prompting for Estimating Epistemic Uncertainty
Neural Information Processing Systems
We explore uncertainty quantification in large language models (LLMs), with the goal of identifying when the uncertainty in responses to a given query is large. We simultaneously consider both epistemic and aleatoric uncertainties, where the former comes from a lack of knowledge about the ground truth (such as facts or the language), and the latter comes from irreducible randomness (such as multiple possible answers).
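As an illustration of why the epistemic/aleatoric distinction matters, consider the naive baseline of sampling several responses and measuring the entropy of the empirical answer distribution. This is a hedged sketch, not the paper's method: the helper `answer_entropy` and the example answers are hypothetical, and the point is that entropy alone measures only *total* uncertainty.

```python
from collections import Counter
import math

def answer_entropy(samples):
    """Shannon entropy (in nats) of the empirical answer distribution.

    A crude proxy for *total* uncertainty: by itself it cannot separate
    epistemic uncertainty (lack of knowledge) from aleatoric uncertainty
    (several genuinely valid answers), which is the distinction the
    paper targets.
    """
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# A confidently repeated answer gives zero entropy.
print(answer_entropy(["Paris"] * 5))  # 0.0

# A 50/50 split gives log(2) ~ 0.693 nats, but the split alone cannot
# tell us whether the model is unsure (epistemic) or the question
# genuinely admits two answers (aleatoric).
print(answer_entropy(["A", "B", "A", "B"]))
```

Disentangling the two sources of uncertainty requires more than response frequencies, which motivates the iterative-prompting approach in the title.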