Towards Mitigating Hallucination in Large Language Models via Self-Reflection
