Do Language Models Know When They're Hallucinating References?