Mind What You Ask For: Emotional and Rational Faces of Persuasion by Large Language Models
Mieleszczenko-Kowszewicz, Wiktoria, Bajcar, Beata, Babiak, Jolanta, Dyczek, Berenika, Świstak, Jakub, Biecek, Przemysław
–arXiv.org Artificial Intelligence
Be careful what you ask for; you just might get it. This saying fits the way large language models (LLMs) are trained: instead of being rewarded for correctness, they are increasingly rewarded for pleasing the recipient, and so they become increasingly effective at persuading us that their answers are valuable. But what tricks do they use in this persuasion? In this study, we examine the psycholinguistic features of responses generated by twelve different language models. By grouping response content according to rational or emotional prompts and exploring the social influence principles employed by LLMs, we ask whether and how we can mitigate the risks of LLM-driven mass misinformation. We position this study within the broader discourse on human-centred AI, emphasizing the need for interdisciplinary approaches to mitigate the cognitive and societal risks posed by persuasive AI responses.
Feb-13-2025