LLM-Check: Investigating Detection of Hallucinations in Large Language Models