Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation
