Preserving Knowledge in Large Language Models with Model-Agnostic Self-Decompression
Zilun Zhang, Yutao Sun, Tiancheng Zhao, Leigang Sha, Ruochen Xu, Kyusong Lee, Jianwei Yin
–arXiv.org Artificial Intelligence
Humans can retain old knowledge while learning new information, but Large Language Models (LLMs) often suffer from catastrophic forgetting when post-pretrained or supervised fine-tuned (SFT) on domain-specific data. Moreover, Multimodal Large Language Models (MLLMs), which combine an LLM base with a visual projector (e.g., LLaVA), show a significant decline on language benchmarks compared to their single-modality counterparts. To address these challenges, we introduce a novel model-agnostic self-decompression method, Tree Generation (TG), which decompresses the knowledge within LLMs into a training corpus. This paper focuses on TG-SFT, which synthetically generates SFT data for the instruction-tuning stage. By incorporating the dumped corpus during SFT for MLLMs, we significantly reduce the forgetting problem.
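The abstract describes the pipeline only at a high level. Below is a minimal, hypothetical Python sketch of that idea: a frozen LLM (stubbed here as a plain callable) "self-decompresses" its knowledge into instruction/response pairs via a tree-style topic expansion, and the dumped corpus is then mixed into the domain SFT data. All function names, the sub-topic expansion rule, and the mixing ratio are illustrative assumptions, not the paper's actual TG algorithm.

```python
import random

def generate_pairs(llm, topic, n=2):
    """Hypothetical self-decompression step: the frozen LLM writes its own
    instruction/response pairs about `topic` (stands in for one TG node)."""
    prompts = [f"Explain {topic} (variant {i})." for i in range(n)]
    return [(p, llm(p)) for p in prompts]

def tg_sft_corpus(llm, root_topics, branch=2, depth=2):
    """Breadth-first 'tree generation' sketch: dump pairs at every node,
    then expand each topic into `branch` sub-topics (stubbed naming)."""
    corpus, frontier = [], list(root_topics)
    for _ in range(depth):
        next_frontier = []
        for topic in frontier:
            corpus += generate_pairs(llm, topic)
            next_frontier += [f"{topic} / sub{i}" for i in range(branch)]
        frontier = next_frontier
    return corpus

def mixed_sft_data(domain_data, dumped, ratio=0.5, seed=0):
    """Mix the dumped general-knowledge corpus into the domain SFT set,
    so instruction tuning sees both and forgets less (assumed ratio)."""
    rng = random.Random(seed)
    k = min(int(len(domain_data) * ratio), len(dumped))
    return domain_data + rng.sample(dumped, k)
```

In practice the stub `llm` would be replaced by actual generation with the frozen base model, and `mixed_sft_data` would feed a standard SFT trainer; the sketch only shows where the dumped corpus enters the pipeline.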
Jun-19-2024