Exploring User Security and Privacy Attitudes and Concerns Toward the Use of General-Purpose LLM Chatbots for Mental Health
Jabari Kwesi, Jiaxun Cao, Riya Manchanda, Pardis Emami-Naeini
arXiv.org Artificial Intelligence
Individuals are increasingly relying on large language model (LLM)-enabled conversational agents for emotional support. While prior research has examined privacy and security issues in chatbots specifically designed for mental health purposes, these chatbots are overwhelmingly "rule-based" offerings that do not leverage generative AI. Little empirical research currently measures users' privacy and security concerns, attitudes, and expectations when using general-purpose LLM-enabled chatbots to manage and improve mental health. Through 21 semi-structured interviews with U.S. participants, we identified critical misconceptions and a general lack of risk awareness. Participants conflated the human-like empathy exhibited by LLMs with human-like accountability and mistakenly believed that their interactions with these chatbots were safeguarded by the same regulations (e.g., HIPAA) as disclosures with a licensed therapist. We introduce the concept of "intangible vulnerability," where emotional or psychological disclosures are undervalued compared to more tangible forms of information (e.g., financial or location-based data). To address this, we propose recommendations to safeguard user mental health disclosures with general-purpose LLM-enabled chatbots more effectively.
Jul-16-2025