Can Large Language Models Outperform Non-Experts in Poetry Evaluation? A Comparative Study Using the Consensual Assessment Technique
Sawicki, Piotr, Grześ, Marek, Brown, Dan, Góes, Fabrício
arXiv.org Artificial Intelligence
The Consensual Assessment Technique (CAT) evaluates creativity through holistic expert judgments. We investigate the use of two advanced Large Language Models (LLMs), Claude-3-Opus and GPT-4o, to evaluate poetry using a methodology inspired by the CAT. On a dataset of 90 poems, we found that these LLMs can surpass non-expert human judges at matching a ground truth based on publication venue, particularly when assessing smaller subsets of poems. Claude-3-Opus performed slightly better than GPT-4o. We show that LLMs are viable tools for accurately assessing poetry, paving the way for their broader application to other creative domains.
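The abstract does not include the authors' prompt, rating scale, or scoring protocol, so the sketch below only illustrates the general idea of CAT-style LLM judging: ask a model for a single holistic quality score per poem, then compare scores against a venue-based ground truth. The model name, prompt wording, 1–10 scale, placeholder dataset, and aggregation step are all assumptions, not the paper's method; the call uses the `anthropic` Python SDK.

```python
import re
import anthropic

# Hypothetical poems with a venue-based ground-truth label
# (1 = prestigious publication venue, 0 = amateur venue).
# Replace the "..." placeholders with real poem texts.
POEMS = [
    {"text": "...", "venue_label": 1},
    {"text": "...", "venue_label": 0},
]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def rate_poem(poem_text: str, model: str = "claude-3-opus-20240229") -> int:
    """Ask the LLM for one holistic quality score, in the spirit of the CAT."""
    prompt = (
        "You are judging poetry. Rate the following poem's overall quality "
        "on a scale from 1 (poor) to 10 (excellent). "
        "Reply with the number only.\n\n" + poem_text
    )
    message = client.messages.create(
        model=model,
        max_tokens=8,
        messages=[{"role": "user", "content": prompt}],
    )
    # Extract the first integer in the reply; fall back to the midpoint.
    match = re.search(r"\d+", message.content[0].text)
    return int(match.group()) if match else 5


scores = [rate_poem(p["text"]) for p in POEMS]

# One simple way to "match the ground truth": check whether poems from the
# stronger venue receive higher scores on average than the others.
strong = [s for s, p in zip(scores, POEMS) if p["venue_label"] == 1]
weak = [s for s, p in zip(scores, POEMS) if p["venue_label"] == 0]
print("mean score (strong venue):", sum(strong) / len(strong))
print("mean score (weak venue):", sum(weak) / len(weak))
```

In practice one would repeat the rating over subsets of poems and use a rank-agreement measure rather than a mean comparison, but this is enough to show how a holistic-judgment protocol translates into an LLM pipeline.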
Feb-26-2025
- Country:
- Europe > United Kingdom
- England (0.14)
- North America (0.28)
- Genre:
- Research Report > New Finding (1.00)