HalluCana: Fixing LLM Hallucination with A Canary Lookahead
Tianyi Li, Erenay Dayanik, Shubhi Tyagi, Andrea Pierleoni
In this paper, we present HalluCana, a canary lookahead to detect and correct factuality hallucinations of Large Language Models (LLMs) in long-form generation. HalluCana detects and intervenes as soon as traces of hallucination emerge, during and even before generation. To support timely detection, we exploit the internal factuality representation in the LLM hidden space, where we investigate various proxies for the LLM's factuality self-assessment and discuss its relation to the model's context familiarity acquired during pre-training. On biography generation, our method improves generation quality by up to 2.5x while consuming over 6 times less compute.
arXiv.org Artificial Intelligence
Dec-10-2024
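
The abstract describes probing the LLM's hidden states for a factuality signal and using it as a canary check on drafted continuations. The sketch below is only an illustrative approximation of that idea, not HalluCana's actual method: the model name, the untrained linear probe, the thresholded retry loop, and all function names are placeholder assumptions.

```python
# Minimal sketch of a hidden-state factuality probe used as a canary check.
# Assumptions: a small placeholder model ("gpt2") and an *untrained* linear probe;
# a real system would train the probe on labeled factual/hallucinated examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder, not the model used in the paper
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

# Stand-in for a trained factuality probe over last-layer hidden states.
probe = torch.nn.Linear(model.config.hidden_size, 1)

@torch.no_grad()
def factuality_score(prompt: str, draft: str) -> float:
    """Score a drafted continuation by probing the hidden state at its final token."""
    inputs = tok(prompt + draft, return_tensors="pt")
    hidden = model(**inputs).hidden_states[-1]        # (1, seq_len, hidden_size)
    return torch.sigmoid(probe(hidden[0, -1])).item() # probe the last-token state

@torch.no_grad()
def canary_generate(prompt: str, threshold: float = 0.5, max_tries: int = 3) -> str:
    """Draft a short continuation and keep it only if the probe deems it likely factual."""
    for _ in range(max_tries):
        ids = tok(prompt, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=30, do_sample=True, top_p=0.9)
        draft = tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)
        if factuality_score(prompt, draft) >= threshold:
            return draft  # canary check passed
    return ""  # hypothetical fallback; the paper's intervention strategy differs
```

The key design point the sketch tries to convey is that the factuality check reads an internal representation rather than the generated text, so it can fire before a hallucinated continuation is committed to the output.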
- Country:
- Asia
- Middle East > UAE
- Abu Dhabi Emirate > Abu Dhabi (0.04)
- Singapore (0.04)
- Thailand > Bangkok
- Bangkok (0.04)
- Europe
- Croatia > Dubrovnik-Neretva County
- Dubrovnik (0.04)
- France (0.05)
- Italy > Calabria
- Catanzaro Province > Catanzaro (0.04)
- United Kingdom > England
- Cambridgeshire > Cambridge (0.04)
- Greater London > London (0.04)
- North America
- Canada
- British Columbia > Metro Vancouver Regional District
- Vancouver (0.04)
- Ontario > Toronto (0.04)
- United States
- New York > New York County
- New York City (0.04)
- Washington > King County
- Seattle (0.14)
- Oceania > Australia (0.04)
- South America > Argentina (0.04)
- Genre:
- Personal (0.46)
- Research Report (0.50)
- Industry:
- Government > Regional Government (0.67)
- Leisure & Entertainment (1.00)
- Media > Music (0.93)
- Technology: