Enhancing Hallucination Detection via Future Context
Joosung Lee, Cheonbok Park, Hwiyeol Jo, Jeonghoon Kim, Joonsuk Park, Kang Min Yoo
arXiv.org Artificial Intelligence
Large Language Models (LLMs) are widely used to generate plausible text on online platforms without revealing the generation process. As users increasingly encounter such black-box outputs, detecting hallucinations has become a critical challenge. To address this challenge, we develop a hallucination detection framework for black-box generators. Motivated by the observation that hallucinations, once introduced, tend to persist in subsequent text, we sample future contexts. These sampled future contexts provide valuable clues for hallucination detection and can be effectively integrated with various sampling-based methods. We demonstrate consistent performance improvements across multiple methods when combined with our proposed sampling approach.
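The core idea above can be illustrated with a minimal sketch: sample several future continuations from a black-box generator, then score how well a candidate sentence is supported by those continuations, treating low support as a hallucination signal. This is a simplified illustration, not the authors' actual method; the token-overlap scorer, the `toy_generate` stub, and all function names are assumptions introduced here for demonstration.

```python
import random


def sample_future_contexts(generate, prefix, k=5, seed=0):
    """Sample k future continuations from a black-box generator.

    `generate` stands in for any black-box text generator; here it
    takes a prefix and an RNG and returns one continuation string.
    """
    rng = random.Random(seed)
    return [generate(prefix, rng) for _ in range(k)]


def consistency_score(sentence, futures):
    """Score a sentence by its average token overlap with sampled futures.

    A low score means the claim is rarely echoed by future contexts,
    which (under the persistence observation) flags possible hallucination.
    This crude lexical overlap is a placeholder for a stronger scorer,
    e.g. an NLI model or question-answering consistency check.
    """
    tokens = set(sentence.lower().split())
    if not tokens:
        return 0.0
    support = sum(
        len(tokens & set(f.lower().split())) / len(tokens) for f in futures
    )
    return support / len(futures)


# Toy black-box generator: most continuations repeat one consistent fact.
def toy_generate(prefix, rng):
    facts = ["paris is the capital of france"] * 4 + ["lyon is a city in france"]
    return rng.choice(facts)


futures = sample_future_contexts(toy_generate, "The capital of France is", k=10)
faithful = consistency_score("paris is the capital of france", futures)
hallucinated = consistency_score("berlin is the capital of france", futures)
```

In this toy setting the faithful claim scores higher than the hallucinated one, since the sampled futures repeatedly restate the supported fact; in practice the futures would come from resampled LLM generations and the scorer would be a learned consistency model.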
Jul-29-2025