Fundamental Risks in the Current Deployment of General-Purpose AI Models: What Have We (Not) Learnt From Cybersecurity?
–arXiv.org Artificial Intelligence
Fundamental Risks in the Current Deployment of General-Purpose AI Models: What Have We (Not) Learnt From Cybersecurity? General Purpose AI - such as Large Language Models (LLMs) - have seen rapid deployment in a wide range of use cases. Most surprisingly, they have have made their way from plain language models, to chat-bots, all the way to an almost "operating system"-like status that can control decisions and logic of an application. Tool-use, Microsoft co-pilot/office integration, and OpenAIs Altera are just a few examples of increased autonomy, data access, and execution capabilities. Unfortunately, it turns out that the current technology is vulnerable to attacks like prompt and in-direct prompt injection. This means that a message sent to the AI by a user or even an attacker injecting a message into the AI, can alter the behavior and lead to malicious and harmful outcomes.
arXiv.org Artificial Intelligence
Dec-19-2024
- Country:
- Europe (0.32)
- North America > United States
- California > San Francisco County > San Francisco (0.15)
- Genre:
- Research Report (0.40)
- Industry:
- Government > Military
- Cyberwarfare (0.65)
- Information Technology > Security & Privacy (1.00)
- Government > Military
- Technology: