Manipulation and the AI Act: Large Language Model Chatbots and the Danger of Mirrors
Large Language Model chatbots are increasingly taking the form and visage of human beings, adopting human faces, names, voices, personalities, and quirks, including those of celebrities and well-known political figures. Personifying AI chatbots could foreseeably increase users' trust in them. However, it could also make them more capable of manipulation by creating the illusion of a close and intimate relationship with an artificial entity. The European Commission has finalized the AI Act, with the EU Parliament adopting amendments that ban manipulative and deceptive AI systems which cause significant harm to users. Although the AI Act covers harms that accumulate over time, it is unlikely to prevent harms arising from prolonged discussions with AI chatbots. Specifically, a chatbot could reinforce a person's negative emotional state over weeks, months, or years through negative feedback loops, prolonged conversations, or harmful recommendations, contributing to the deterioration of a user's mental health.
arXiv.org Artificial Intelligence
Mar-24-2025