Mitigating Hallucinations of Large Language Models via Knowledge Consistent Alignment
