This Prompt Can Make an AI Chatbot Identify and Extract Personal Details From Your Chats
When talking with a chatbot, you might inevitably give up your personal information: your name, for instance, and perhaps details about where you live and work, or your interests. The more you share with a large language model, the greater the risk of that information being abused if there's a security flaw.

A group of security researchers from the University of California, San Diego (UCSD) and Nanyang Technological University in Singapore is now revealing a new attack that secretly commands an LLM to gather personal information from chats, including names, ID numbers, payment card details, email addresses, and mailing addresses, and send it directly to a hacker.

The attack, which the researchers have named Imprompter, uses an algorithm to transform a prompt given to the LLM into a hidden set of malicious instructions. An English-language sentence telling the LLM to find personal information someone has entered and send it to the hackers is turned into what appears to be a random selection of characters.
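To get a feel for the general shape of such a transformation, consider the simplified sketch below. It is not the researchers' actual algorithm, which relies on optimization guided by the target model itself; this version uses plain random local search, and every name in it (`obfuscate_prompt`, `loss_fn`, and so on) is hypothetical, chosen only for illustration.

```python
# A minimal sketch, assuming a stand-in loss function: greedily swap
# readable tokens for opaque ones whenever the swap keeps the model's
# behavior (low loss on the attacker's target output) intact. This is
# NOT the Imprompter algorithm itself, only the general idea of turning
# a readable instruction into seemingly random characters.
import random
from typing import Callable, List

def obfuscate_prompt(
    tokens: List[str],                      # tokenized natural-language instruction
    vocab: List[str],                       # candidate replacement tokens
    loss_fn: Callable[[List[str]], float],  # lower = model still follows the instruction
    max_loss: float,                        # tolerated behavior drift per swap
    iters: int = 1000,
) -> List[str]:
    """Randomly swap one token at a time, accepting a swap only if the
    attack still works, so readability degrades while behavior holds."""
    tokens = list(tokens)
    for _ in range(iters):
        pos = random.randrange(len(tokens))
        candidate = tokens[:pos] + [random.choice(vocab)] + tokens[pos + 1:]
        if loss_fn(candidate) <= max_loss:
            tokens = candidate  # swap accepted: same behavior, less readable
    return tokens
```

In a real attack of this kind, the loss would be computed from the target model's own outputs, so the search converges on character strings that look like noise to a human but still reliably elicit the data-exfiltration behavior.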