Self-Disclosure to AI: The Paradox of Trust and Vulnerability in Human-Machine Interactions
arXiv.org Artificial Intelligence
In this paper, we explore the paradox of trust and vulnerability in human-machine interactions, inspired by Alexander Reben's BlabDroid project. This project used small, unassuming robots that actively engaged with people and successfully elicited personal thoughts and secrets, often more effectively than human interlocutors. This phenomenon raises intriguing questions about how trust and self-disclosure operate in interactions with machines, even in their simplest forms. We examine how trust in technology shifts by analyzing the psychological processes behind such encounters. The analysis applies theories such as Social Penetration Theory and Communication Privacy Management Theory to understand the balance between perceived security and the risk of exposure when personal information and secrets are shared with machines or AI. Additionally, we draw on philosophical perspectives, such as posthumanism and phenomenology, to engage with broader questions about trust, privacy, and vulnerability in the digital age. The rapid incorporation of AI into our most private spheres challenges us to rethink and redefine our ethical responsibilities.
Dec-29-2024