Cross-Layer Attention Probing for Fine-Grained Hallucination Detection
Malavika Suresh, Rahaf Aljundi, Ikechukwu Nkisi-Orji, Nirmalie Wiratunga
arXiv.org Artificial Intelligence
With the large-scale adoption of Large Language Models (LLMs) in various applications, there is a growing reliability concern due to their tendency to generate inaccurate text, i.e. hallucinations. In this work, we propose Cross-Layer Attention Probing (CLAP), a novel activation probing technique for hallucination detection, which processes the LLM activations across the entire residual stream as a joint sequence. Our empirical evaluations using five LLMs and three tasks show that CLAP improves hallucination detection compared to baselines on both greedy-decoded responses and responses sampled at higher temperatures, thus enabling fine-grained detection, i.e. the ability to distinguish hallucinations from non-hallucinations among different sampled responses to a given prompt. This allows us to propose a detect-then-mitigate strategy using CLAP to reduce hallucinations and improve LLM reliability compared to direct mitigation approaches. Finally, we show that CLAP maintains high reliability even when applied out-of-distribution.
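The abstract describes probing activations "across the entire residual stream as a joint sequence." A minimal sketch of what such a cross-layer probe could look like is given below; this is an illustration of the general idea only, not the authors' implementation, and the class name, the learned pooling query, and the sigmoid classification head are all assumptions:

```python
import torch
import torch.nn as nn

class CrossLayerAttentionProbe(nn.Module):
    """Hypothetical probe that attends jointly over per-layer activations.

    Input: residual-stream activations stacked across layers,
    shape (batch, n_layers, d_model). A learned query attends over
    the layer sequence, and a linear head scores hallucination risk.
    """

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, 1)

    def forward(self, layer_acts: torch.Tensor) -> torch.Tensor:
        # Attend over the layer dimension with a single learned query.
        q = self.query.expand(layer_acts.size(0), -1, -1)
        pooled, _ = self.attn(q, layer_acts, layer_acts)
        # Per-response hallucination score in [0, 1].
        return torch.sigmoid(self.classifier(pooled.squeeze(1)))

# Toy usage: 8 responses, 12 layers, hidden size 64.
probe = CrossLayerAttentionProbe(d_model=64)
acts = torch.randn(8, 12, 64)
scores = probe(acts)  # shape (8, 1)
```

Treating the layers as a joint sequence, rather than probing each layer separately, lets the probe weight whichever depths carry the strongest hallucination signal for a given response.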
Sep-15-2025