Semantic Density: Uncertainty Quantification for Large Language Models through Confidence Measurement in Semantic Space
–Neural Information Processing Systems
With the widespread application of Large Language Models (LLMs) to various domains, concerns regarding the trustworthiness of LLMs in safety-critical scenarios have been raised, due to their unpredictable tendency to hallucinate and generate misinformation. Existing LLMs do not have inherent functionality to provide users with an uncertainty/confidence metric for each response they generate, making it difficult to evaluate their trustworthiness. Although several studies aim to develop uncertainty quantification methods for LLMs, they have fundamental limitations, such as being restricted to classification tasks, requiring additional training and data, considering only lexical rather than semantic information, and being prompt-wise but not response-wise. A new framework is proposed in this paper to address these issues.
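To make the idea of response-wise confidence in semantic space concrete, here is a minimal illustrative sketch: sample several answers to the same prompt, embed them, and score a target answer by how densely the sampled answers cluster around it. This is not the paper's Semantic Density algorithm; the embedding model, kernel, and bandwidth below are assumptions chosen only for illustration.

```python
# Illustrative sketch (not the paper's method): score a response by the density
# of sampled responses around it in an embedding space.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedding library

def semantic_confidence(target: str, samples: list[str], bandwidth: float = 0.5) -> float:
    """Return a confidence proxy for `target` given other sampled responses."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    embs = model.encode(samples, normalize_embeddings=True)   # shape (n, d)
    t = model.encode([target], normalize_embeddings=True)[0]  # shape (d,)
    # Gaussian kernel on cosine distance between the target and each sample.
    dists = 1.0 - embs @ t
    weights = np.exp(-(dists ** 2) / (2.0 * bandwidth ** 2))
    # Higher value => the target lies in a dense semantic region, i.e. many
    # sampled responses express a similar meaning.
    return float(weights.mean())

# Usage: sample k answers for one prompt from an LLM, then score a chosen answer.
# confidence = semantic_confidence(answer, sampled_answers)
```

Note that this sketch is per-response (the score depends on the specific answer being evaluated), which is the property the abstract contrasts with prompt-wise methods.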
Mar-27-2025, 14:52:15 GMT