Knowledge-Aware Self-Correction in Language Models via Structured Memory Graphs
arXiv.org Artificial Intelligence
Large Language Models (LLMs) are powerful yet prone to generating factual errors, commonly referred to as hallucinations. We present a lightweight, interpretable framework for knowledge-aware self-correction of LLM outputs using structured memory graphs based on RDF triples. Without retraining or fine-tuning, our method post-processes model outputs and corrects factual inconsistencies via external semantic memory. We demonstrate the approach using DistilGPT-2 and show promising results on simple factual prompts.
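The abstract describes post-hoc correction against an external store of RDF-style triples. A minimal sketch of that idea, assuming a toy in-memory triple store and pre-extracted (subject, predicate, object) claims (all names and the lookup scheme here are illustrative assumptions, not the paper's implementation):

```python
# Toy semantic memory: (subject, predicate) -> object, standing in for an RDF graph.
MEMORY_GRAPH = {
    ("Paris", "capital_of"): "France",
    ("Einstein", "born_in"): "Ulm",
}

def correct_claim(subject, predicate, obj):
    """Check one extracted claim against the memory graph.

    If the graph stores a different object for (subject, predicate),
    replace the object; otherwise return the claim unchanged.
    """
    truth = MEMORY_GRAPH.get((subject, predicate))
    if truth is not None and truth != obj:
        return (subject, predicate, truth)   # corrected from memory
    return (subject, predicate, obj)         # verified, or unknown to memory

# Example: a hallucinated object gets replaced, an unknown claim passes through.
print(correct_claim("Paris", "capital_of", "Germany"))   # -> ('Paris', 'capital_of', 'France')
print(correct_claim("Turing", "born_in", "London"))      # -> ('Turing', 'born_in', 'London')
```

In practice the claim extraction step (mapping free-form LLM text to triples) is the hard part; this sketch only illustrates the lookup-and-substitute correction stage.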
Jul-8-2025