Large Language Models are as persuasive as humans, but how? About the cognitive effort and moral-emotional language of LLM arguments
arXiv.org Artificial Intelligence
Large Language Models (LLMs) are already as persuasive as humans. However, we know very little about how they achieve this. This paper investigates the persuasion strategies of LLMs, comparing them with human-generated arguments. Using data from an experiment with 1,251 participants, we compare the persuasion strategies of LLM-generated and human-generated arguments through measures of cognitive effort (lexical and grammatical complexity) and moral-emotional language (sentiment and morality). Our results indicate that LLMs produce arguments that require higher cognitive effort, exhibiting more complex grammatical and lexical structures than their human counterparts. Additionally, LLMs show a marked propensity to engage more deeply with moral language, drawing on both positive and negative moral foundations more frequently than humans. In contrast with previous research, we found no significant difference in the emotional content produced by LLMs and humans. By showing that there is no equivalence in process despite equivalence in outcome, our findings contribute to the emerging knowledge on AI and persuasion, highlighting the dual potential of LLMs to both enhance and undermine informational integrity through their persuasion strategies.
Apr-21-2024