SeSE: A Structural Information-Guided Uncertainty Quantification Framework for Hallucination Detection in LLMs

Zhao, Xingtao, Peng, Hao, Su, Dingli, Zeng, Xianghua, Liu, Chunyang, Liao, Jinzhi, Yu, Philip S.

arXiv.org Artificial Intelligence

Reliable uncertainty quantification (UQ) is essential for deploying large language models (LLMs) in safety-critical scenarios, as it enables them to abstain from responding when uncertain, thereby avoiding "hallucinating" falsehoods. However, state-of-the-art UQ methods primarily rely on semantic probability distributions or pairwise distances, overlooking latent semantic structural information that could enable more precise uncertainty estimates. This paper presents Semantic Structural Entropy (SeSE), a principled UQ framework that quantifies the inherent semantic uncertainty of LLMs from a structural information perspective for hallucination detection. SeSE operates in a zero-resource manner and is applicable to both open- and closed-source LLMs, making it an "off-the-shelf" solution for new models and tasks. Specifically, to effectively model semantic spaces, we first develop an adaptively sparsified directed semantic graph construction algorithm that captures directional semantic dependencies while automatically pruning unnecessary connections that introduce negative interference. We then exploit latent semantic structural information through hierarchical abstraction: SeSE is defined as the structural entropy of the optimal semantic encoding tree, formalizing intrinsic uncertainty within semantic spaces after optimal compression. A higher SeSE value corresponds to greater uncertainty, indicating that LLMs are highly likely to generate hallucinations. In addition, to enhance fine-grained UQ in long-form generation, we extend SeSE to quantify the uncertainty of individual claims by modeling their random semantic interactions, providing theoretically explicable hallucination detection. Extensive experiments across 29 model-dataset combinations show that SeSE significantly outperforms advanced UQ baselines.
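To make the two-stage recipe in the abstract concrete, below is a minimal Python sketch of a SeSE-style score: responses sampled for one prompt are connected by a pairwise semantic-similarity matrix (assumed to come from, e.g., an NLI entailment model), the graph is sparsified, and a two-level structural entropy is computed over a semantic partition. The top-k sparsification and the fixed partition are simplifying assumptions standing in for the paper's adaptive sparsification and optimal encoding tree; this is not the authors' exact algorithm.

```python
# Minimal sketch of a SeSE-style uncertainty score (not the authors' exact algorithm).
# Assumption: `sims` holds hypothetical pairwise semantic similarities between sampled
# responses (e.g., NLI entailment probabilities); the "optimal encoding tree" is
# approximated by a fixed two-level partition of the responses into semantic clusters.
import numpy as np

def build_sparse_graph(sims: np.ndarray, k: int = 3) -> np.ndarray:
    """Keep only the k strongest outgoing edges per node (a simple stand-in for
    the paper's adaptive sparsification); the diagonal is zeroed out."""
    n = sims.shape[0]
    adj = np.zeros_like(sims)
    for i in range(n):
        row = sims[i].copy()
        row[i] = 0.0
        top = np.argsort(row)[-k:]
        adj[i, top] = row[top]
    # Symmetrize so degrees and volumes are well defined for the entropy formula.
    return np.maximum(adj, adj.T)

def two_level_structural_entropy(adj: np.ndarray, labels: np.ndarray) -> float:
    """Two-level structural entropy of a weighted graph under a partition;
    higher values indicate a less compressible, more uncertain semantic space."""
    deg = adj.sum(axis=1)
    vol = deg.sum()
    h = 0.0
    for c in np.unique(labels):
        mask = labels == c
        vol_c = deg[mask].sum()
        if vol_c == 0:
            continue
        d = deg[mask]
        d = d[d > 0]
        # Intra-cluster node terms.
        h -= np.sum((d / vol) * np.log2(d / vol_c))
        # Cut term: total weight of edges leaving the cluster.
        g_c = adj[mask][:, ~mask].sum()
        h -= (g_c / vol) * np.log2(vol_c / vol)
    return float(h)

# Usage: sample several responses to one prompt, score pairwise similarity,
# cluster them (here a toy hand-made partition), and compute the entropy.
sims = np.array([[1.0, 0.9, 0.8, 0.1],
                 [0.9, 1.0, 0.7, 0.2],
                 [0.8, 0.7, 1.0, 0.1],
                 [0.1, 0.2, 0.1, 1.0]])
adj = build_sparse_graph(sims, k=2)
labels = np.array([0, 0, 0, 1])  # toy semantic clusters
print(two_level_structural_entropy(adj, labels))
```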



c3177be226ee12e34d6ba3b5e6fe6a5b-Paper-Conference.pdf

Neural Information Processing Systems

This paper questions the effectiveness of a modern predictive uncertainty quantification approach, called evidential deep learning (EDL), in which a single neural network model is trained to learn a meta distribution over the predictive distribution by minimizing a specific objective function.
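For readers unfamiliar with the setup being questioned, the sketch below shows a typical EDL classifier: a single network outputs non-negative evidence that parameterizes a Dirichlet meta-distribution over class probabilities. The concrete objective differs across EDL variants; the expected-MSE loss of Sensoy et al. (2018) appears here only as one common instance, not as the specific objective this paper analyzes.

```python
# Minimal sketch of an evidential deep learning (EDL) classifier: the network's
# non-negative "evidence" parameterizes a Dirichlet meta-distribution over the
# categorical predictive distribution. The loss is one common EDL objective,
# used purely for concreteness.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialClassifier(nn.Module):
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, num_classes))

    def forward(self, x):
        evidence = F.softplus(self.net(x))  # non-negative evidence per class
        alpha = evidence + 1.0              # Dirichlet concentration parameters
        return alpha

def edl_mse_loss(alpha: torch.Tensor, y_onehot: torch.Tensor) -> torch.Tensor:
    """Expected squared error under the Dirichlet, plus its variance term."""
    s = alpha.sum(dim=1, keepdim=True)      # Dirichlet strength
    p = alpha / s                           # expected class probabilities
    err = ((y_onehot - p) ** 2).sum(dim=1)
    var = (p * (1 - p) / (s + 1)).sum(dim=1)
    return (err + var).mean()

# Epistemic uncertainty can then be read off as num_classes / alpha.sum(dim=1):
# it is high when the network has accumulated little evidence for any class.
```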




Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces

Neural Information Processing Systems

Many problems in machine learning reduce to learning a probability distribution (or policy) over sequences of discrete actions so as to maximize a downstream utility function. Examples include generating text sequences to maximize a task-specific metric like BLEU and generating action sequences in reinforcement learning (RL) to maximize expected return.
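As a concrete instance of the setup described in the abstract (though not of the paper's own direct-optimization estimator), the sketch below uses the standard score-function (REINFORCE) gradient to push a sequence policy toward higher downstream utility; `utility_fn` is a hypothetical placeholder for, e.g., a BLEU scorer or an environment return.

```python
# Score-function (REINFORCE) sketch for learning a distribution over discrete
# action sequences that maximizes a downstream utility. This illustrates the
# general problem setup only -- not the paper's direct-optimization method.
import torch
import torch.nn as nn

class SequencePolicy(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def sample(self, bos: int, length: int):
        """Sample a token sequence and return its total log-probability."""
        token = torch.tensor([[bos]])
        h, log_prob, seq = None, 0.0, []
        for _ in range(length):
            out, h = self.rnn(self.embed(token), h)
            dist = torch.distributions.Categorical(logits=self.out(out[:, -1]))
            token = dist.sample().unsqueeze(0)
            log_prob = log_prob + dist.log_prob(token.squeeze(0))
            seq.append(int(token.item()))
        return seq, log_prob

def reinforce_step(policy, optimizer, utility_fn, bos=0, length=5):
    """One gradient step on E[utility]: grad = utility * grad log p(sequence)."""
    seq, log_prob = policy.sample(bos, length)
    reward = utility_fn(seq)                  # e.g., BLEU against a reference
    loss = -(reward * log_prob).sum()         # score-function estimator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return seq, reward

# Toy usage with a hypothetical utility (number of distinct tokens sampled):
policy = SequencePolicy(vocab_size=100)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
seq, r = reinforce_step(policy, opt, utility_fn=lambda s: float(len(set(s))))
```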




We would like to thank the reviewers for their valuable feedback, which we will duly consider and integrate in our …

Neural Information Processing Systems

In this paper, we demonstrate that "the decision boundaries of a DNN can only exist as long …". We clarify the main points raised by the reviewers below. We further shed more light on the relationship between adv. … Nevertheless, we never claim that, within the discr. … In fact, we agree that the margin associated with different discr. … Overall, however, we firmly believe that the invariant dirs. …