Towards Understanding Text Hallucination of Diffusion Models via Local Generation Bias
Rui Lu, Runzhe Wang, Kaifeng Lyu, Xitai Jiang, Gao Huang, Mengdi Wang
–arXiv.org Artificial Intelligence
Score-based diffusion models have achieved remarkable performance in generating realistic images, audio, and video. While these models produce high-quality samples with impressive detail, they often introduce unrealistic artifacts, such as distorted fingers or hallucinated text with no meaning. This paper focuses on textual hallucinations, where diffusion models correctly generate individual symbols but assemble them in a nonsensical manner. Through experimental probing, we consistently observe that this phenomenon is attributable to the network's local generation bias. Denoising networks tend to produce outputs that rely heavily on highly correlated local regions, particularly when different dimensions of the data distribution are nearly pairwise independent. This behavior leads to a generation process that decomposes the global distribution into separate, independent distributions for each symbol, ultimately failing to capture global structure such as the underlying grammar. Intriguingly, this bias persists across denoising network architectures, including MLPs and Transformers, even though these architectures have the capacity to model global dependencies. These findings also offer insight into other types of hallucination, beyond text, as consequences of implicit biases in the denoising models. Additionally, we theoretically analyze the training dynamics for a specific case, a two-layer MLP learning parity points on a hypercube, to explain the underlying mechanism.

Inspired by the diffusion process in physics (Sohl-Dickstein et al., 2015), diffusion models learn to generate samples from a data distribution by fitting its score function, gradually transforming pure Gaussian noise into the desired samples. However, despite the impressively realistic details they produce, diffusion models consistently exhibit artifacts in their outputs. One common issue is the generation of plausible low-level features or local details while failing to accurately model complex 3D objects or the underlying semantics (Borji, 2023; Liu et al., 2023).
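To make the parity setting above concrete, the sketch below trains a two-layer MLP denoiser on parity points of a hypercube and then probes whether points it generates respect the global parity constraint. Parity data is a natural stress test for local generation bias because every coordinate is marginally uniform and nearly independent of any small subset of other coordinates, so a network that only captures local structure cannot detect the constraint. The dimension, network width, single noise level, training schedule, and one-shot denoising probe are illustrative assumptions, not the paper's exact experimental or theoretical setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

d = 8       # hypercube dimension (illustrative choice)
n = 4096    # number of training points

# Parity points on the hypercube {-1, +1}^d: keep only vertices whose
# coordinate product is +1 (an even number of -1 entries).
candidates = torch.randint(0, 2, (4 * n, d)).float() * 2 - 1
x = candidates[candidates.prod(dim=1) > 0][:n]

# Two-layer MLP denoiser: predicts the clean point from its noisy version.
denoiser = nn.Sequential(nn.Linear(d, 512), nn.ReLU(), nn.Linear(512, d))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

sigma = 1.0  # single noise level for simplicity (assumption)
for step in range(2000):
    noisy = x + sigma * torch.randn_like(x)
    loss = ((denoiser(noisy) - x) ** 2).mean()  # denoising regression objective
    opt.zero_grad()
    loss.backward()
    opt.step()

# One-shot probe (not the full reverse diffusion): denoise pure Gaussian noise
# and check whether the rounded outputs satisfy the global parity constraint.
# A network that only captures per-coordinate (local) structure is expected to
# satisfy the constraint roughly half the time rather than always.
with torch.no_grad():
    samples = denoiser(sigma * torch.randn(n, d)).sign()
    valid = (samples.prod(dim=1) > 0).float().mean().item()
    print(f"fraction of generated points satisfying parity: {valid:.3f}")
```

The same probe can be repeated with a Transformer in place of the MLP; the paper's observation is that the failure to capture the global constraint persists even for architectures that can, in principle, model such dependencies.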
Mar-5-2025