The Implicit Bias of Structured State Space Models Can Be Poisoned With Clean Labels

Yonatan Slutzky, Yotam Alexander, Noam Razin, Nadav Cohen

arXiv.org Machine Learning 

Neural networks are powered by an implicit bias: a tendency of gradient descent to fit training data in a way that generalizes to unseen data. A recent class of neural network models gaining increasing popularity is structured state space models (SSMs), regarded as an efficient alternative to transformers. Prior work argued that the implicit bias of SSMs leads to generalization in a setting where data is generated by a low-dimensional teacher. In this paper, we revisit the latter setting, and formally establish a phenomenon entirely undetected by prior work on the implicit bias of SSMs. Namely, we prove that while implicit bias leads to generalization under many choices of training data, there exist special examples whose inclusion in training completely distorts the implicit bias, to a point where generalization fails. This failure occurs despite the special training examples being labeled by the teacher, i.e., having clean labels! We empirically demonstrate the phenomenon, with SSMs trained independently and as part of non-linear neural networks. In the area of adversarial machine learning, disrupting generalization with cleanly labeled training examples is known as clean-label poisoning. Given the proliferation of SSMs, particularly in large language models, we believe significant efforts should be invested in further delineating their susceptibility to clean-label poisoning, and in developing methods for overcoming this susceptibility.

Overparameterized neural networks can fit their training data in multiple ways, some of which generalize to unseen data, while others do not. Remarkably, when the training data is fit via gradient descent (or a variant thereof), generalization tends to occur. This phenomenon--one of the greatest mysteries in modern machine learning (Zhang et al., 2021; Chatterjee and Zielinski, 2022)--is often viewed as stemming from an implicit bias: a tendency of gradient descent, when applied to neural network models, to fit training data in a way that complies with common data-generating distributions. The latter view was formalized for several neural network models and data-generating distributions (Neyshabur, 2017; Soudry et al., 2018; Gunasekar et al., 2018; Razin and Cohen, 2020). A recent class of neural network models gaining increasing popularity is structured state space models (SSMs).
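To make the teacher-student setting referred to above concrete, the following is a minimal, illustrative sketch (not the paper's experimental code) of training an overparameterized diagonal linear SSM by gradient descent on sequences whose labels are produced by a low-dimensional teacher SSM. All dimensions, the diagonal parameterization, the number of training sequences, and the optimizer settings are assumptions chosen for readability; the paper's construction of the special (clean-label) poisoning examples is not reproduced here.

# Minimal illustrative sketch (not the paper's code): an overparameterized
# diagonal linear SSM (student) trained by gradient descent on sequences
# labeled by a low-dimensional teacher SSM. All sizes and optimizer settings
# below are assumptions made for readability.
import torch

torch.manual_seed(0)
T, n_train, d_teacher, d_student = 16, 5, 2, 32   # assumed sizes

def impulse_response(a, b, c, T):
    # SISO diagonal linear SSM: h_t = diag(a) h_{t-1} + b x_t, y = c^T h_T,
    # so y = sum_k w_k x_{T-k} with w_k = sum_i c_i a_i^k b_i.
    powers = a.unsqueeze(0) ** torch.arange(T).unsqueeze(1)   # (T, d)
    return powers @ (b * c)                                   # (T,)

# Teacher: low-dimensional SSM that supplies clean labels.
a_t = torch.rand(d_teacher) * 0.8 + 0.1
b_t, c_t = torch.randn(d_teacher), torch.randn(d_teacher)
w_teacher = impulse_response(a_t, b_t, c_t, T)

X = torch.randn(n_train, T)          # training inputs (most recent step last)
y = X.flip(1) @ w_teacher            # clean labels produced by the teacher
# A clean-label poisoning attack would append a specially constructed input
# together with its teacher-generated label; that construction is the
# paper's contribution and is not reproduced here.

# Student: overparameterized diagonal SSM trained with gradient descent.
a_s = (torch.rand(d_student) * 0.8 + 0.1).requires_grad_()
b_s = (1e-2 * torch.randn(d_student)).requires_grad_()
c_s = (1e-2 * torch.randn(d_student)).requires_grad_()
opt = torch.optim.SGD([a_s, b_s, c_s], lr=1e-2)

for step in range(5000):
    opt.zero_grad()
    w_student = impulse_response(a_s, b_s, c_s, T)
    loss = ((X.flip(1) @ w_student - y) ** 2).mean()
    loss.backward()
    opt.step()

# Measure generalization on fresh inputs labeled by the same teacher.
X_test = torch.randn(1000, T)
with torch.no_grad():
    w_student = impulse_response(a_s, b_s, c_s, T)
    test_err = ((X_test.flip(1) @ (w_student - w_teacher)) ** 2).mean()
print(f"train loss {loss.item():.2e}, test error {test_err.item():.2e}")

Because the number of training sequences here is smaller than the length of the impulse response, many students fit the training data exactly; the abstract's claim is that gradient descent's implicit bias typically selects a generalizing one, yet a few additional examples labeled by the very same teacher can distort that bias enough for generalization to fail.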