DSCC-HS: A Dynamic Self-Reinforcing Framework for Hallucination Suppression in Large Language Models
arXiv.org Artificial Intelligence
Hallucination in Large Language Models (LLMs) is a significant barrier to their reliable deployment, and current mitigation methods such as Retrieval-Augmented Generation (RAG) are often reactive. We introduce **Dynamic Self-reinforcing Calibration for Hallucination Suppression (DSCC-HS)**, a novel, proactive framework that intervenes during autoregressive decoding. Inspired by dual-process cognitive theory, DSCC-HS uses a compact proxy model trained in adversarial roles as a Factual Alignment Proxy (FAP) and a Hallucination Detection Proxy (HDP). During inference, these proxies dynamically steer a large target model by injecting a real-time steering vector, the difference between the FAP and HDP logits, at each decoding step. This plug-and-play approach requires no modification to the target model. Our experiments on TruthfulQA and BioGEN show that DSCC-HS achieves state-of-the-art performance. On TruthfulQA, it reached a 99.2% Factual Consistency Rate (FCR). On the long-form BioGEN benchmark, it attained the highest FActScore of 46.50. These results validate DSCC-HS as a principled and efficient solution for enhancing LLM factuality.
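The per-step intervention the abstract describes can be sketched as contrastive logit steering. This is a minimal illustration only: the function names and the `alpha` strength knob are assumptions for exposition, not the paper's actual API.

```python
def steered_logits(target_logits, fap_logits, hdp_logits, alpha=1.0):
    """Add the steering vector (FAP logits minus HDP logits) to the
    target model's logits; `alpha` is an assumed strength knob."""
    return [t + alpha * (f - h)
            for t, f, h in zip(target_logits, fap_logits, hdp_logits)]

def greedy_step(target_logits, fap_logits, hdp_logits, alpha=1.0):
    """Choose the next token id greedily from the steered logits."""
    logits = steered_logits(target_logits, fap_logits, hdp_logits, alpha)
    return max(range(len(logits)), key=logits.__getitem__)

# Toy 4-token vocabulary: the FAP boosts token 2, the HDP flags token 0.
target = [2.0, 1.0, 1.5, 0.5]
fap = [0.0, 0.0, 3.0, 0.0]
hdp = [3.0, 0.0, 0.0, 0.0]
print(greedy_step(target, fap, hdp))  # steering moves the argmax from 0 to 2
```

In this toy example the unsteered target model would greedily pick token 0, but adding the FAP-minus-HDP vector suppresses the HDP-flagged token and promotes the FAP-favored one, which is the intuition behind applying the correction at every decoding step.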
Sep-18-2025