Large Language Models can be Strong Self-Detoxifiers

Ko, Ching-Yun, Chen, Pin-Yu, Das, Payel, Mroueh, Youssef, Dan, Soham, Kollias, Georgios, Chaudhury, Subhajit, Pedapati, Tejaswini, Daniel, Luca

arXiv.org Artificial Intelligence 

This paper contains examples that may be considered offensive and inappropriate.

Reducing the likelihood of generating harmful and toxic output is an essential task when aligning large language models (LLMs). Existing methods mainly rely on training an external reward model (i.e., another language model) or fine-tuning the LLM on self-generated data to influence the outcome. In this paper, we show that LLMs have the capability of self-detoxification without the use of an additional reward model or re-training. We propose Self-disciplined Autoregressive Sampling (SASA), a lightweight controlled decoding algorithm for toxicity reduction of LLMs. SASA leverages the contextual representations from an LLM to learn linear subspaces characterizing toxic vs. non-toxic output. When auto-completing a response token by token, SASA dynamically tracks the margin of the current output with respect to the toxic subspace and steers the generation away from it by adjusting the autoregressive sampling strategy.

Recent advancements in large language models (LLMs) have dramatically enhanced their capabilities in textual understanding and reasoning (Brown et al., 2020; Kojima et al., 2022). Their ability to perform diverse linguistic tasks and produce coherent text has catalyzed their adoption across a variety of applications (Rae et al., 2021; Hoffmann et al., 2022; Le Scao et al., 2023; Touvron et al., 2023a,b; Achiam et al., 2023). However, with the escalating size of models (Raffel et al., 2020; Brown et al., 2020; Achiam et al., 2023), there is a corresponding increase in the scale of the training data required to avert overfitting and to encapsulate extensive world knowledge. These datasets, predominantly derived from internet crawls and subjected to only basic filtering protocols (Raffel et al., 2020), often harbor biases that are problematic or directly harmful for many applications and do not inherently align with desirable model behavior (Wallace et al., 2019; Gehman et al., 2020). In fact, it is known that language models trained on such data may not only mimic but also amplify these biases (Bolukbasi et al., 2016; Caliskan et al., 2017; Zhao et al., 2018; Sheng et al., 2019; Gehman et al., 2020; Hartvigsen et al., 2022). For example, an "aligned" LLM may be inadvertently or maliciously tricked into generating harmful or toxic output, causing usage violations and safety concerns (Sun et al., 2024).
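To make the margin-steered decoding idea concrete, below is a minimal sketch, not the authors' released implementation. It assumes a linear probe (weights `w`, bias `b`) has already been fit on the LM's own contextual embeddings of labeled toxic/non-toxic text; here the probe weights are random placeholders, and the model name (`gpt2`), `top_k`, and the `beta` trade-off coefficient are illustrative choices rather than values from the paper. Each top-k candidate token is scored by the margin of the would-be continuation, and the sampling distribution is re-weighted to favor tokens that keep the output on the non-toxic side of the learned subspace.

```python
# Sketch of margin-steered autoregressive sampling (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model for illustration
tok = AutoTokenizer.from_pretrained(model_name)
lm = AutoModelForCausalLM.from_pretrained(model_name).eval()

hidden = lm.config.hidden_size
w = torch.randn(hidden)   # hypothetical probe weights; in practice learned from labeled toxic/non-toxic text
b = torch.tensor(0.0)
beta = 5.0                # hypothetical trade-off between LM likelihood and non-toxic margin


@torch.no_grad()
def margin_steered_step(input_ids, top_k=20):
    """Pick the next token by re-weighting the top-k LM candidates with a
    linear 'non-toxic margin' computed from the hidden state that would
    result from appending each candidate token."""
    logits = lm(input_ids).logits[0, -1]
    topk = torch.topk(logits, top_k)

    margins = torch.empty(top_k)
    for i, cand in enumerate(topk.indices):
        cand_ids = torch.cat([input_ids, cand.view(1, 1)], dim=1)
        out = lm(cand_ids, output_hidden_states=True)
        h = out.hidden_states[-1][0, -1]   # context embedding with candidate appended
        margins[i] = w @ h + b             # signed distance from the learned toxic subspace

    # Re-weighted sampling: favor tokens that are both likely and far from the toxic side.
    scores = torch.log_softmax(topk.values, dim=-1) + beta * margins
    probs = torch.softmax(scores, dim=-1)
    next_tok = topk.indices[torch.multinomial(probs, 1)]
    return torch.cat([input_ids, next_tok.view(1, 1)], dim=1)


ids = tok("The weather today is", return_tensors="pt").input_ids
for _ in range(20):
    ids = margin_steered_step(ids)
print(tok.decode(ids[0]))
```

The sketch recomputes the full forward pass for every candidate token for clarity; a practical implementation would cache key/value states and batch the candidate continuations.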