Exploring Image Generation via Mutually Exclusive Probability Spaces and Local Correlation Hypothesis
A common assumption in probabilistic generative models for image generation is that learning the global data distribution suffices to generate novel images via sampling. We investigate a limitation of this core assumption: learning the global distribution can lead to memorization rather than genuinely generative behavior. We propose two theoretical frameworks for this investigation, the Mutually Exclusive Probability Space (MEPS) and the Local Dependence Hypothesis (LDH). MEPS arises from an observation about deterministic mappings (e.g., …). We further derive a lower bound in terms of the overlap coefficient and introduce a Binary Latent Autoencoder (BL-AE) that encodes images into signed binary latent representations. LDH formalizes dependence within a finite observation radius, which motivates our γ-Autoregressive Random Variable Model (γ-ARVM). Using γ-ARVM, we observe that as the observation range increases, autoregressive models progressively shift toward memorization. In the limit of global dependence, the model behaves as a pure memorizer when operating on the binary latents produced by our BL-AE. Comprehensive experiments and discussions support our investigation.

Figure 1: Selecting images for values in the overlap range is ambiguous.

Probabilistic generative models such as Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), diffusion models, and autoregressive models have achieved remarkable progress in image generation. A core assumption is that these models learn an image distribution from which new images can be generated via sampling (Bond-Taylor et al., 2022). We focus specifically on autoregressive models and, for this investigation, introduce the two theoretical frameworks above.
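The lower bound mentioned in the abstract is stated in terms of the overlap coefficient. As background only, here is a minimal sketch of the standard overlap coefficient for discrete distributions; the function name and the discretized setting are illustrative assumptions, and the paper's actual bound is not reproduced here.

```python
import numpy as np

def overlap_coefficient(p, q):
    """Overlap coefficient OVL(p, q) = sum_x min(p(x), q(x)) for two discrete
    distributions given as probability vectors over the same support."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    assert p.shape == q.shape, "distributions must share a support"
    return float(np.minimum(p, q).sum())

# Example: two overlapping categorical distributions
p = np.array([0.5, 0.3, 0.2, 0.0])
q = np.array([0.1, 0.3, 0.4, 0.2])
print(overlap_coefficient(p, q))  # 0.1 + 0.3 + 0.2 + 0.0 = 0.6
```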
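The BL-AE is described as encoding images into signed binary latent representations. A minimal sketch of one way such a bottleneck could be implemented in PyTorch follows, assuming a sign activation with a straight-through gradient estimator; the class names, layer sizes, and the straight-through choice are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SignSTE(torch.autograd.Function):
    """Sign activation with a straight-through gradient, a common trick for
    training networks with binary (+1/-1) codes."""
    @staticmethod
    def forward(ctx, x):
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # pass gradients through the sign unchanged

class BinaryLatentAutoencoder(nn.Module):
    """Hypothetical sketch of an autoencoder with signed binary latents."""
    def __init__(self, dim=784, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, dim))

    def forward(self, x):
        z = SignSTE.apply(self.encoder(x))  # signed binary code in {-1, +1}
        return self.decoder(z), z

# Usage: recon, z = BinaryLatentAutoencoder()(torch.randn(8, 784))
```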
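γ-ARVM ties autoregressive prediction to a finite observation radius γ. The sketch below only illustrates the general idea of conditioning each position on the γ preceding symbols of a binary latent sequence; the MLP predictor, padding scheme, and names are assumptions and do not reproduce the paper's γ-ARVM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalAutoregressiveModel(nn.Module):
    """Illustrative autoregressive model in which the prediction at position t
    conditions only on the `gamma` preceding positions (finite observation radius)."""
    def __init__(self, gamma=8, hidden=128):
        super().__init__()
        self.gamma = gamma
        self.net = nn.Sequential(nn.Linear(gamma, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, z):
        # z: (batch, length) signed binary sequence in {-1, +1}
        B, L = z.shape
        padded = F.pad(z, (self.gamma, 0))          # left-pad the local context
        logits = []
        for t in range(L):
            ctx = padded[:, t:t + self.gamma]        # the gamma symbols before position t
            logits.append(self.net(ctx))             # logit for P(z_t = +1 | local context)
        return torch.cat(logits, dim=1)              # (batch, length)

# Usage: logits = LocalAutoregressiveModel(gamma=4)(torch.randn(2, 16).sign())
```

Increasing γ toward the full sequence length corresponds to the global-dependence limit discussed in the abstract.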
arXiv.org Artificial Intelligence
Sep-24-2025