Beyond Turing Test: Can GPT-4 Sway Experts' Decisions?
Takehiro Takayanagi, Hiroya Takamura, Kiyoshi Izumi, Chung-Chi Chen
arXiv.org Artificial Intelligence
In the post-Turing era, evaluating large language models (LLMs) involves assessing generated text based on readers' reactions rather than merely its indistinguishability from human-produced content. This paper explores how LLM-generated text impacts readers' decisions, focusing on both amateur and expert audiences. Our findings indicate that GPT-4 can generate persuasive analyses affecting the decisions of both amateurs and professionals. Furthermore, we evaluate the generated text from the aspects of grammar, convincingness, logical coherence, and usefulness. The results highlight a high correlation between real-world evaluation through audience reactions and the current multi-dimensional evaluators commonly used for generative models. Overall, this paper shows the potential and risk of using generated text to sway human decisions and also points out a new direction for evaluating generated text, i.e., leveraging the reactions and decisions of readers. We release our dataset to assist future research.
Nov-25-2024
- Country:
- Asia
- Japan > Honshū
- Kantō > Tokyo Metropolis Prefecture > Tokyo (0.05)
- Middle East > UAE
- Abu Dhabi Emirate > Abu Dhabi (0.04)
- Singapore (0.04)
- Europe
- Ireland > Leinster
- County Dublin > Dublin (0.04)
- Italy (0.04)
- North America > United States
- Hawaii > Honolulu County
- Honolulu (0.04)
- Minnesota > Hennepin County
- Minneapolis (0.14)
- New Mexico > Santa Fe County
- Santa Fe (0.04)
- New York > New York County
- New York City (0.04)
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Banking & Finance > Trading (0.93)
- Technology: