Resolving Editing-Unlearning Conflicts: A Knowledge Codebook Framework for Large Language Model Updating
Binchi Zhang, Zhengzhang Chen, Zaiyi Zheng, Jundong Li, Haifeng Chen
–arXiv.org Artificial Intelligence
Large Language Models (LLMs) excel in natural language processing by encoding extensive human knowledge, but their utility relies on timely updates as knowledge evolves. Updating LLMs involves two key tasks simultaneously: unlearning to remove unwanted knowledge and editing to incorporate new information. Existing methods face two major challenges: ineffective knowledge storage (either too sparse or too dense) and task conflicts between editing and unlearning, both of which we validate theoretically and experimentally. To address these issues, we propose LOKA, a conflict-free framework for LLM updating based on a knowledge codebook. During training, updated knowledge is stored in multiple codebook memories. To optimize knowledge storage, a similarity-aware knowledge mapping ensures that related knowledge pieces are clustered and allocated to the same memory. Additionally, LOKA resolves task conflicts by employing task-specific and multi-task memories guided by a conflict score. In the inference stage, LOKA retrieves the most relevant memory from the codebook and plugs it into the original LLM to apply the updated knowledge. A learning-based router controls codebook activation to further improve knowledge utilization. Extensive experiments demonstrate the effectiveness of LOKA in LLM knowledge updating tasks.
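For intuition, here is a minimal sketch of what similarity-based codebook retrieval could look like: each memory is indexed by a key vector (e.g., a cluster centroid of related knowledge pieces), and inference selects the most similar memory only when its similarity clears a router threshold. This is an illustrative assumption, not the authors' implementation; all names (`KnowledgeCodebook`, `retrieve`, the threshold-based router) are hypothetical.

```python
# Illustrative sketch of codebook-style memory retrieval (not LOKA's actual code).
import numpy as np

class KnowledgeCodebook:
    def __init__(self):
        self.keys = []      # one key vector per memory (e.g., a cluster centroid)
        self.memories = []  # the stored knowledge pieces / parameter updates

    def add_memory(self, key: np.ndarray, memory: dict) -> None:
        """Store a knowledge memory indexed by its (normalized) key vector."""
        self.keys.append(key / np.linalg.norm(key))
        self.memories.append(memory)

    def retrieve(self, query: np.ndarray, threshold: float = 0.5):
        """Return the most similar memory, or None if the (hypothetical)
        router keeps the original LLM, i.e., no codebook activation."""
        if not self.keys:
            return None
        q = query / np.linalg.norm(query)
        sims = np.stack(self.keys) @ q          # cosine similarities to all keys
        best = int(np.argmax(sims))
        return self.memories[best] if sims[best] >= threshold else None

# Usage: related knowledge pieces would be clustered into the same memory during
# training; at inference, the query representation picks the relevant memory.
codebook = KnowledgeCodebook()
codebook.add_memory(np.array([1.0, 0.0, 0.0, 0.0]), {"task": "edit", "update": "..."})
codebook.add_memory(np.array([0.0, 1.0, 0.0, 0.0]), {"task": "unlearn", "update": "..."})
print(codebook.retrieve(np.array([0.9, 0.1, 0.0, 0.0])))
```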
Jan-31-2025