Zero-knowledge LLM hallucination detection and mitigation through fine-grained cross-model consistency