Avoiding Copyright Infringement via Machine Unlearning
Guangyao Dou, Zheyuan Liu, Qing Lyu, Kaize Ding, Eric Wong
arXiv.org Artificial Intelligence
This scenario involves unlearning specific books over time, followed by subsequent unlearning requests. An effective algorithm should be stable, meaning it should ensure unlearning efficacy (removing unwanted knowledge effectively) while maintaining locality (preserving non-targeted knowledge and the model's reasoning ability). Few works have studied this setting, leaving it unclear whether existing methods are suitable.

Large Language Models (LLMs) (Brown et al., 2020; Chowdhery et al., 2023; Touvron et al., 2023) have made significant progress through pre-training on extensive transformer-based architectures and learning from diverse text data (Ouyang et al., 2022; Kojima et al., 2022; Qin et al., 2023; Lewkowycz et al., 2022; Roziere et al., 2023; Lyu et al., 2023).
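The stability requirement above pairs two measurable criteria: efficacy (the model should lose the targeted books) and locality (everything else should survive). The sketch below illustrates one common baseline for this sequential setting, gradient ascent on the forget text applied request by request; it is not the method proposed in this paper, and the model name, toy texts, and hyperparameters are illustrative assumptions.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative assumptions: a small stand-in model and toy texts; the
    # paper targets much larger LLMs and real copyrighted books.
    MODEL_NAME = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    def lm_loss(texts):
        # Standard causal LM loss, with padding masked out of the labels.
        batch = tokenizer(texts, return_tensors="pt",
                          padding=True, truncation=True)
        labels = batch["input_ids"].masked_fill(
            batch["attention_mask"] == 0, -100)
        return model(**batch, labels=labels).loss

    def unlearn_step(forget_texts):
        # Gradient *ascent* on the forget text: maximize its LM loss.
        model.train()
        optimizer.zero_grad()
        (-lm_loss(forget_texts)).backward()
        optimizer.step()

    @torch.no_grad()
    def perplexity(texts):
        model.eval()
        return torch.exp(lm_loss(texts)).item()

    # Unlearning requests arriving over time (toy stand-ins for books).
    forget_requests = [["Excerpt from copyrighted book A ..."],
                       ["Excerpt from copyrighted book B ..."]]
    retain_texts = ["General knowledge the model should keep ..."]

    for t, book in enumerate(forget_requests, start=1):
        for _ in range(3):  # a few ascent steps per request
            unlearn_step(book)
        # Efficacy: perplexity on the forgotten book should rise.
        # Locality: perplexity on retained text should stay roughly flat.
        print(f"request {t}: forget ppl {perplexity(book):.1f}, "
              f"retain ppl {perplexity(retain_texts):.1f}")

In practice, naive gradient ascent of this kind tends to trade locality for efficacy as requests accumulate; that instability in the sequential setting is the open problem the abstract points to.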
Jun-16-2024