Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation
