Can Humans Oversee Agents to Prevent Privacy Leakage? A Study on Privacy Awareness, Preferences, and Trust in Language Model Agents
Zhiping Zhang, Bingcan Guo, Tianshi Li
Language model (LM) agents that act on users' behalf for personal tasks can boost productivity, but they are also susceptible to unintended privacy leakage risks. We present the first study of people's capacity to oversee the privacy implications of LM agents. Through a task-based survey (N=300), we investigate how people react to and assess responses generated by LM agents for asynchronous interpersonal communication tasks, compared with responses they wrote themselves. We found that people may favor the agent response containing more privacy leakage over the response they drafted, or consider both equally good, raising the rate of harmful disclosure from 15.7% to 55.0%. We further uncovered distinct patterns of privacy behaviors, attitudes, and preferences, as well as nuanced interactions between privacy considerations and other factors. Our findings shed light on designing agentic systems that enable privacy-preserving interactions and achieve bidirectional alignment on privacy preferences, helping users calibrate trust.
arXiv.org Artificial Intelligence
Nov-2-2024
- Genre:
  - Overview (1.00)
  - Questionnaire & Opinion Survey (1.00)
  - Research Report
    - Experimental Study (1.00)
    - New Finding (1.00)
- Industry:
  - Information Technology > Security & Privacy (1.00)
- Technology:
  - Information Technology
    - Artificial Intelligence
      - Machine Learning
        - Neural Networks > Deep Learning (0.46)
        - Statistical Learning > Clustering (0.68)
      - Natural Language
        - Chatbot (1.00)
        - Large Language Model (0.94)
      - Representation & Reasoning > Agents (0.69)
    - Communications > Social Media (1.00)
    - Data Science > Data Mining (1.00)
    - Security & Privacy (1.00)