"It's a Fair Game'', or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents
Zhiping Zhang, Michelle Jia, Hao-Ping (Hank) Lee, Bingsheng Yao, Sauvik Das, Ada Lerner, Dakuo Wang, Tianshi Li
arXiv.org Artificial Intelligence
The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users' perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users constantly face trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users' erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users' ability to navigate the trade-offs. We discuss practical design guidelines and the need for paradigm shifts to protect the privacy of LLM-based CA users.
Sep-20-2023