Language Detoxification with Attribute-Discriminative Latent Space
Jin Myung Kwak, Minseon Kim, Sung Ju Hwang
arXiv.org Artificial Intelligence
Transformer-based Language Models (LMs) have achieved impressive results on natural language understanding tasks, but they can also generate toxic text such as insults, threats, and profanity, limiting their real-world applications. To overcome this issue, a few text generation approaches aim to detoxify toxic texts using additional LMs or perturbations. However, these methods require excessive memory, computation, and time, which are serious bottlenecks for real-world application. To address such limitations, we propose an effective yet efficient method for language detoxification using an attribute-discriminative latent space. Specifically, we project the latent space of an original Transformer LM onto a discriminative latent space that separates texts well by their attributes, using a projection block and an attribute discriminator. This allows the LM to control text generation to be non-toxic with minimal memory and computation overhead. We validate our model, the Attribute-Discriminative Language Model (ADLM), on detoxified language and dialogue generation tasks, on which it significantly outperforms baselines in both performance and efficiency.
Jul-5-2023
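To make the mechanism described in the abstract concrete, below is a minimal sketch of the stated setup: a small projection block that maps a Transformer LM's hidden states into an attribute-discriminative latent space, trained against an attribute (toxic vs. non-toxic) discriminator. All module names, layer sizes, and the training loss here are illustrative assumptions for this sketch, not the authors' released implementation.

```python
# Minimal sketch, assuming PyTorch: a projection block plus an attribute
# discriminator on top of a Transformer LM's latent space. Sizes, layer
# choices, and the loss below are hypothetical, not from the paper's code.
import torch
import torch.nn as nn

class ProjectionBlock(nn.Module):
    """Hypothetical projection from the LM's latent space into a smaller,
    attribute-discriminative latent space."""
    def __init__(self, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(hidden_dim, latent_dim),
            nn.GELU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.proj(h)

class AttributeDiscriminator(nn.Module):
    """Hypothetical classifier separating projected latents by attribute
    (e.g., 0 = non-toxic, 1 = toxic)."""
    def __init__(self, latent_dim: int, num_attributes: int = 2):
        super().__init__()
        self.classifier = nn.Linear(latent_dim, num_attributes)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.classifier(z)

if __name__ == "__main__":
    hidden_dim, latent_dim = 768, 128  # assumed dimensions
    projector = ProjectionBlock(hidden_dim, latent_dim)
    discriminator = AttributeDiscriminator(latent_dim)

    # Stand-in for pooled hidden states from the base Transformer LM.
    h = torch.randn(4, hidden_dim)
    labels = torch.tensor([0, 1, 0, 1])  # attribute labels per text

    z = projector(h)
    logits = discriminator(z)
    # Training the projector against this loss encourages the latent
    # space to separate texts by attribute.
    loss = nn.functional.cross_entropy(logits, labels)
    print(f"discriminator loss: {loss.item():.4f}")
```

Under these assumptions, optimizing the projection block against the discriminator loss is what shapes the latent space so that toxic and non-toxic texts are well separated; the LM can then condition generation on the non-toxic region with little added memory or compute.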