Hallucination Detection in LLMs Using Spectral Features of Attention Maps
Jakub Binkowski, Denis Janiak, Albert Sawczyn, Bogdan Gabrys, Tomasz Kajdanowicz
arXiv.org Artificial Intelligence
Large Language Models (LLMs) have demonstrated remarkable performance across various tasks but remain prone to hallucinations. Detecting hallucinations is essential for safety-critical applications, and recent methods leverage attention map properties to this end, though their effectiveness remains limited. In this work, we investigate the spectral features of attention maps by interpreting them as adjacency matrices of graph structures. We propose the $\text{LapEigvals}$ method, which utilises the top-$k$ eigenvalues of the Laplacian matrix derived from the attention maps as an input to hallucination detection probes. Empirical evaluations demonstrate that our approach achieves state-of-the-art hallucination detection performance among attention-based methods. Extensive ablation studies further highlight the robustness and generalisation of $\text{LapEigvals}$, paving the way for future advancements in the hallucination detection domain.
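To make the idea concrete, the following is a minimal sketch of how spectral features of a single attention map might be computed: the map is treated as the weighted adjacency matrix of a graph over the tokens, its Laplacian is formed, and the top-$k$ eigenvalues are returned as probe features. The function name `lap_eigvals`, the symmetrisation step, and the choice of the unnormalised Laplacian are illustrative assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

def lap_eigvals(attention: np.ndarray, k: int = 10) -> np.ndarray:
    """Top-k eigenvalues of the graph Laplacian of one attention map.

    The (n, n) attention matrix is interpreted as the weighted adjacency
    matrix of a graph over the n tokens. Attention is directed, so it is
    symmetrised before the Laplacian is formed (an assumption made here).
    """
    A = 0.5 * (attention + attention.T)   # symmetrised adjacency matrix
    D = np.diag(A.sum(axis=1))            # degree matrix
    L = D - A                             # unnormalised graph Laplacian
    eigvals = np.linalg.eigvalsh(L)       # real eigenvalues, ascending order
    k = min(k, eigvals.shape[0])
    return eigvals[::-1][:k]              # largest k eigenvalues first
```

In a setup like the one the abstract describes, features of this kind would presumably be extracted per attention head and layer, concatenated, and fed to a trained hallucination detection probe such as a linear classifier; the aggregation details are assumptions here.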
Feb-24-2025