Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning

Liu, Yujian, Chang, Shiyu, Jaakkola, Tommi, Zhang, Yang

arXiv.org Artificial Intelligence 

Recent studies have identified one aggravating factor of LLM hallucination as the knowledge inconsistency between pre-training and fine-tuning, where unfamiliar fine-tuning data mislead the LLM to fabricate plausible but wrong outputs. This also opens new possibilities for knowledge-controlled generation in LLMs.

Hallucination of large language models (LLMs) refers to the phenomenon where LLMs' outputs look plausible but diverge from real-world facts. It has become a major concern with LLMs, seriously undermining their reliability and trustworthiness (Huang et al., 2023; Ji et al., 2023). Recent research has unveiled one aggravating factor of LLM hallucination: the knowledge inconsistency between the pre-training and tuning (e.g., instruction- or fine-tuning) stages (Gekhman et al., 2024; Kang et al., 2024; Lin et al., 2024). More specifically, if the tuning stage involves training examples that require knowledge the LLM has not seen during pre-training, the LLM is misled to fabricate plausible but wrong answers to unfamiliar questions (Schulman, 2023; Gao, 2021; Goldberg, 2023). For example, consider fine-tuning a model for a question answering (QA) task with the example 'When was John Estes born?', and assume that the LLM has never learned about John Estes during pre-training. Since the LLM is nevertheless trained to produce the correct answer, '1987', it is encouraged to respond with a random legitimate year whenever it is asked about the birth year of any unknown person, thus giving rise to hallucination.

These findings highlight an important but previously understudied consideration in LLM training: the disentanglement between knowledge and skill. Specifically, it has been discovered that knowledge and skills are acquired at different stages of LLM training, the former during pre-training and the latter during tuning (Zhou et al., 2023; Gudibande et al., 2024).
However, although the focus of the tuning stage is to learn skills rather than knowledge, the learning process is still interfered with by any inconsistency on the knowledge side, because the two aspects are entangled in the training data.
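To make the notion of knowledge inconsistency concrete, the sketch below (not the paper's method) illustrates the kind of familiarity probe suggested by work such as Gekhman et al. (2024): fine-tuning QA examples are split into those the pre-trained model already answers correctly ("known") and those it cannot ("unknown", the hallucination-prone case). The function `query_model` is a hypothetical stand-in for a closed-book query to the pre-trained LLM; here it is stubbed with a fixed lookup so the example is self-contained.

```python
def query_model(question: str) -> str:
    """Hypothetical closed-book probe of the pre-trained model.

    In practice this would be a greedy (or sampled) generation from the
    base LLM; here it is stubbed with a toy knowledge table.
    """
    pretraining_knowledge = {
        "When was Barack Obama born?": "1961",
    }
    return pretraining_knowledge.get(question, "unknown")


def split_by_familiarity(qa_pairs):
    """Separate fine-tuning examples the model already answers correctly
    (consistent with pre-training knowledge) from those it does not
    (unfamiliar, and thus liable to encourage fabrication)."""
    known, unknown = [], []
    for question, answer in qa_pairs:
        target = known if query_model(question) == answer else unknown
        target.append((question, answer))
    return known, unknown


qa_pairs = [
    ("When was Barack Obama born?", "1961"),  # consistent with pre-training
    ("When was John Estes born?", "1987"),    # unfamiliar: hallucination risk
]
known, unknown = split_by_familiarity(qa_pairs)
print(len(known), len(unknown))  # 1 1
```

Under this framing, the 'John Estes' example lands in the unknown split: training on it still forces the model to emit '1987', which is precisely the knowledge-inconsistent supervision the passage describes.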