ASIDE: Architectural Separation of Instructions and Data in Language Models
Egor Zverev, Evgenii Kortukov, Alexander Panfilov, Soroush Tabesh, Alexandra Volkova, Sebastian Lapuschkin, Wojciech Samek, Christoph H. Lampert
arXiv.org Artificial Intelligence
Despite their remarkable performance, large language models lack elementary safety features, which makes them susceptible to numerous malicious attacks. In particular, previous work has identified the absence of an intrinsic separation between instructions and data as a root cause for the success of prompt injection attacks. In this work, we propose an architectural change, ASIDE, that allows the model to clearly separate instructions from data by using separate embeddings for them. Instead of training the embeddings from scratch, we propose a method to convert an existing model to ASIDE form by using two copies of the original model's embedding layer and applying an orthogonal rotation to one of them. We demonstrate the effectiveness of our method by showing (1) highly increased instruction-data separation scores without a loss in model capabilities and (2) competitive results on prompt injection benchmarks, even without dedicated safety training. Additionally, we study the working mechanism behind our method through an analysis of model representations.

Large language models (LLMs) are commonly associated with interactive, open-ended chat applications such as ChatGPT. However, in many practical applications LLMs are integrated as components of larger software systems. Their rich natural language understanding abilities allow them to be used for text analysis and generation, translation, document summarization, or information retrieval (Zhao et al., 2023). In all of these scenarios, the system is given instructions, for example as a system prompt, and data, for example a user input or an uploaded document. These two forms of input play different roles: the instruction should be executed, determining the behavior of the model, while the data should be processed, i.e., transformed to become the output of the system.
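The core mechanism described above, two copies of the embedding layer with an orthogonal rotation applied to one of them, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the embedding matrix is random rather than pretrained, the orthogonal matrix is obtained via QR decomposition (the specific rotation ASIDE uses may differ), and the `embed` helper and its role labels are hypothetical names introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 1000, 64  # toy vocabulary size and embedding dimension

# Stand-in for a pretrained model's embedding matrix; tokens marked as
# instructions are looked up here.
E_instr = rng.standard_normal((vocab, d))

# An orthogonal matrix built via QR decomposition (illustrative choice).
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

# Second copy of the embeddings, rotated orthogonally; tokens marked as
# data are looked up here, giving the model a geometric cue for the
# instruction/data distinction.
E_data = E_instr @ Q

def embed(token_ids, roles):
    """Look up each token in the instruction or data table.

    roles[i] must be 'instruction' or 'data' for token_ids[i].
    """
    rows = [E_instr[t] if r == "instruction" else E_data[t]
            for t, r in zip(token_ids, roles)]
    return np.stack(rows)

# Because Q is orthogonal, the rotated table preserves token norms and
# pairwise angles, so per-token information is unchanged.
assert np.allclose(np.linalg.norm(E_data, axis=1),
                   np.linalg.norm(E_instr, axis=1))
```

The same token id thus maps to two different vectors depending on its role, while the orthogonality of the rotation keeps the rotated copy an isometric image of the original, which is what allows conversion from an existing model without retraining the embeddings from scratch.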
Mar-13-2025