Representation Shattering in Transformers: A Synthetic Study with Knowledge Editing
Kento Nishi, Maya Okawa, Rahul Ramesh, Mikail Khona, Hidenori Tanaka, Ekdeep Singh Lubana
arXiv.org Artificial Intelligence
Knowledge Editing (KE) algorithms alter models' weights to perform targeted updates to incorrect, outdated, or otherwise unwanted factual associations. To better delineate the possibilities and limitations of these approaches, recent work has shown that applying KE can adversely affect models' factual recall accuracy and diminish their general reasoning abilities. While these studies give broad insights into the potential harms of KE algorithms, e.g., via performance evaluations on benchmarks, we argue little is understood as to why such destructive failures occur. Is it possible that KE methods distort representations of concepts beyond the targeted fact, hence hampering abilities more broadly? If so, what is the extent of this distortion? Motivated by such questions, we define a novel synthetic task wherein a Transformer is trained from scratch to internalize a "structured" knowledge graph. The structure enforces relationships between entities of the graph, such that editing a factual association has "trickling effects" on other entities in the graph (e.g., altering X's parent from Y to Z affects who X's siblings' parent is). Through evaluations of edited models and analysis of extracted representations, we show that KE inadvertently affects representations of entities beyond the targeted one, distorting relevant structures that allow a model to infer unseen knowledge about an entity. We call this phenomenon representation shattering and demonstrate that it degrades factual recall and reasoning performance more broadly. To corroborate our findings in a more naturalistic setup, we perform preliminary experiments with pre-trained Llama and Mamba models, reproducing the representation shattering effect therein as well. Overall, our work yields a precise mechanistic hypothesis to explain why KE has adverse effects on model abilities.

The static nature of models' training pipelines implies that, as our world evolves, their internalized knowledge can become incorrect or outdated. To address this, recent work has proposed several protocols for knowledge editing (KE), wherein the goal is to minimally and precisely alter model weights such that only the targeted information (and its relevant associations) is updated, while all unrelated information remains (ideally) unaffected (Mitchell et al., 2022; Meng et al., 2022a; 2023; Dai et al., 2021; Cheng et al., 2023; De Cao et al., 2021; Sinitsin et al., 2020).
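To make the "trickling effects" described in the abstract concrete, the following is a minimal sketch (not the authors' code; all entity names and helper functions are invented for illustration) of a toy knowledge graph in which facts are logically coupled, so that a single parent-relation edit forces several downstream sibling facts to change as well:

```python
# Hypothetical illustration of "trickling effects": in a structured knowledge
# graph, facts are logically coupled, so editing one fact implicitly changes
# the set of facts an edited model must also unlearn or newly infer.

from itertools import combinations

# Ground-truth parent relation: child -> parent (names are invented).
parent = {"ann": "pat", "bob": "pat", "cal": "pat", "dee": "quin"}

def siblings(child: str) -> set[str]:
    """Entities sharing a parent with `child` (excluding `child` itself)."""
    return {c for c, p in parent.items() if p == parent[child] and c != child}

def implied_facts() -> set[tuple[str, str, str]]:
    """All (subject, relation, object) triples derivable from `parent`."""
    facts = {(c, "parent", p) for c, p in parent.items()}
    for a, b in combinations(parent, 2):
        if parent[a] == parent[b]:
            facts |= {(a, "sibling", b), (b, "sibling", a)}
    return facts

before = implied_facts()
print(siblings("ann"))  # {"bob", "cal"} before the edit

# A single "knowledge edit": change ann's parent from pat to quin.
parent["ann"] = "quin"
after = implied_facts()

# The one edit invalidates and creates several facts at once: ann is no
# longer a sibling of bob/cal, and becomes a sibling of dee.
print(before - after)  # facts an edited model must also unlearn
print(after - before)  # facts an edited model must newly infer
```

Under this framing, a KE method that rewrites only the single targeted triple, without reshaping the representation structure that encodes these couplings, leaves the model internally inconsistent; the paper's "representation shattering" hypothesis concerns how such edits distort that structure.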
Jan 8, 2025