To Believe or Not to Believe Your LLM: Iterative Prompting for Estimating Epistemic Uncertainty

Neural Information Processing Systems 

We explore uncertainty quantification in large language models (LLMs), with the goal of identifying when the uncertainty in responses to a given query is large. We simultaneously consider both epistemic and aleatoric uncertainties, where the former comes from a lack of knowledge about the ground truth (such as about facts or the language), and the latter comes from irreducible randomness (such as multiple possible answers).
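The iterative-prompting idea in the title can be illustrated with a small sketch. The code below is a minimal illustration, not the authors' algorithm: it assumes a hypothetical sampler(prompt, n) function returning n independent answer samples from an LLM, and it scores how much the empirical answer distribution shifts when one of the model's own previous answers is fed back into the prompt. The prompt template, the total-variation comparison, and all function names are assumptions made for this sketch.

    from collections import Counter

    def answer_distribution(sampler, prompt, n=20):
        # Empirical distribution over normalized answer strings.
        # sampler(prompt, n) is a hypothetical stand-in for n i.i.d.
        # samples from an actual LLM API.
        counts = Counter(a.strip().lower() for a in sampler(prompt, n))
        return {a: c / n for a, c in counts.items()}

    def epistemic_score(sampler, query, n=20):
        # Distribution of answers to the bare query.
        base = answer_distribution(sampler, query, n)
        shifts = []
        for prev in base:
            # Iterative prompting: condition the model on one of
            # its own sampled answers (assumed template).
            prompt = (query + "\nOne possible answer is: " + prev +
                      "\nNow answer the original question.")
            shifted = answer_distribution(sampler, prompt, n)
            support = set(base) | set(shifted)
            # Total-variation distance between the two empirical
            # distributions, used here as a sensitivity proxy.
            tv = 0.5 * sum(abs(base.get(a, 0.0) - shifted.get(a, 0.0))
                           for a in support)
            shifts.append(tv)
        # A large shift means the model is easily swayed by injected
        # answers, which this sketch reads as epistemic uncertainty.
        return max(shifts) if shifts else 0.0

Under the abstract's distinction, a roughly stable distribution under such conditioning suggests the spread among answers is aleatoric (several valid answers, irreducible randomness), while a distribution that collapses onto whatever answer was injected suggests epistemic uncertainty (lack of knowledge about the ground truth).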
