Mechanistic Behavior Editing of Language Models

Joykirat Singh, Subhabrata Dutta, Tanmoy Chakraborty

arXiv.org Artificial Intelligence 

Large Language Models trained on web-scale text acquire language generation abilities that can solve a wide range of tasks, particularly when task knowledge is refined into the generative prior using in-context examples. However, spurious features learned from noisy data hinder their generalizability. Supervised finetuning can introduce task specificity, but at the cost of data inefficiency. Prior studies indicate that (i) noisy neural circuitries coexist with generalizable ones within LLMs, and (ii) finetuning typically enhances (or suppresses) existing abilities without introducing new ones. Building upon these findings, we propose TaRot, a novel method for task adaptation. TaRot intervenes in the neural circuitries using learnable rotation matrices that are optimized via Bayesian optimization on labelled samples, on the order of standard few-shot prompting examples. Experiments on multiple classification and generation tasks using LLMs of varying sizes reveal the efficacy of TaRot, improving upon both zero- and few-shot performance, with average improvements (across models and tasks) of 23.81% and 11.15%, respectively.

Large Language Models (LLMs) acquire the ability to associate different language concepts presented in a sequential context by optimizing the prediction probability of the next token given a context. Despite its apparent simplicity, when scaled across web-sized text corpora, such a learning strategy gives rise to the ability to solve a wide range of tasks presented in natural language.
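The next-token training objective referred to here is the standard autoregressive cross-entropy loss; written out (a standard formulation, not specific to this paper):

```latex
% Autoregressive language-modeling loss over a token sequence x_1, ..., x_T:
% the model p_theta maximizes the likelihood of each token given its prefix.
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_{\theta}\left(x_t \mid x_{<t}\right)
```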
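To make the abstract's description of the method concrete, below is a minimal, hypothetical sketch of the kind of intervention TaRot describes: a learnable rotation (here composed of Givens rotations over a few coordinate planes, an assumption made for illustration) applied to a single attention head's output, with the rotation angles tuned by Gaussian-process Bayesian optimization (scikit-optimize's `gp_minimize`) on a few-shot-sized labelled set. The toy head outputs, labels, `PLANES`, and fixed `readout` are illustrative stand-ins, not the paper's actual implementation.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

# Hypothetical illustration only: the plane choice, toy data, and fixed
# readout below are assumptions, not the paper's actual setup.
D = 8                      # head-output dimensionality (toy)
PLANES = [(0, 1), (2, 3)]  # coordinate planes to rotate (assumption)

def rotation(angles):
    """Compose one Givens rotation per plane into a D x D rotation matrix."""
    R = np.eye(D)
    for (i, j), theta in zip(PLANES, angles):
        G = np.eye(D)
        c, s = np.cos(theta), np.sin(theta)
        G[i, i], G[j, j] = c, c
        G[i, j], G[j, i] = -s, s
        R = G @ R
    return R

# Few-shot-sized labelled set: toy "attention head outputs" whose label
# direction is deliberately misaligned with the fixed readout, so that a
# rotation in plane (0, 1) can correct the mismatch.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, D))               # 16 labelled samples
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy binary labels
readout = np.eye(D)[0]                     # fixed probe along dimension 0

def loss(angles):
    """0/1 error of the fixed readout after applying the rotation."""
    logits = (X @ rotation(angles).T) @ readout
    preds = (logits > 0).astype(float)
    return float(np.mean(preds != y))

# Derivative-free search over the rotation angles with Gaussian-process
# Bayesian optimization.
result = gp_minimize(
    loss,
    dimensions=[Real(-np.pi, np.pi) for _ in PLANES],
    n_calls=30,
    random_state=0,
)
print("best angles:", result.x, "loss:", result.fun)
```

Bayesian optimization is a natural fit for this setting because the search space (a handful of rotation angles) is low-dimensional and no gradients through the full model are required, which matches the abstract's claim of needing only a few-shot-sized labelled sample.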