Learning What to Trust: Bayesian Prior-Guided Optimization for Visual Generation

Liu, Ruiying; Liang, Yuanzhi; Huang, Haibin; Yu, Tianshu; Zhang, Chi

arXiv.org Artificial Intelligence 

Group Relative Policy Optimization (GRPO) has emerged as an effective and lightweight framework for post-training visual generative models. However, its performance is fundamentally limited by the ambiguity of textual-visual correspondence: a single prompt may validly describe diverse visual outputs, and a single image or video may support multiple equally correct interpretations. This many-to-many relationship leads reward models to produce uncertain and weakly discriminative signals, causing GRPO to underutilize reliable feedback and overfit to noisy feedback. We introduce Bayesian Prior-Guided Optimization (BPGO), a novel extension of GRPO that explicitly models reward uncertainty through a semantic prior anchor. BPGO adaptively modulates optimization trust at two levels: inter-group Bayesian trust allocation emphasizes updates from groups consistent with the prior while down-weighting ambiguous ones, and intra-group prior-anchored renormalization sharpens sample distinctions by expanding confident deviations and compressing uncertain scores. Across both image and video generation tasks, BPGO delivers consistently stronger semantic alignment, enhanced perceptual fidelity, and faster convergence than standard GRPO and recent variants.

Among recent post-training approaches, Group Relative Policy Optimization (GRPO) (Shao et al., 2024; Guo et al., 2025) has emerged as a promising framework, providing stable optimization and noticeable gains in visual quality, motion smoothness, and temporal coherence (Xue et al., 2025). Despite these advances, however, semantic alignment between text prompts and generated images and videos remains a persistent weakness: models often produce visually plausible yet semantically mismatched results.
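Since the abstract describes BPGO's mechanism only at a high level, the sketch below shows one plausible reading of the two-level trust modulation in NumPy: a standard GRPO group-relative baseline, an inter-group trust weight that favors groups agreeing with the prior anchor, and an intra-group rescaling that expands confident deviations and compresses uncertain ones. The concrete formulas (squared-error disagreement, exponential trust and confidence terms) and all names (grpo_advantages, bpgo_advantages, tau, kappa) are illustrative assumptions, not the paper's actual definitions.

import numpy as np

def grpo_advantages(rewards: np.ndarray) -> np.ndarray:
    """Standard GRPO baseline: z-score rewards within each group
    (one group = N candidate generations for the same prompt)."""
    mean = rewards.mean(axis=1, keepdims=True)
    std = rewards.std(axis=1, keepdims=True) + 1e-8
    return (rewards - mean) / std

def bpgo_advantages(rewards: np.ndarray, prior: np.ndarray,
                    tau: float = 1.0, kappa: float = 1.0) -> np.ndarray:
    """Hypothetical sketch of BPGO's trust modulation.
    rewards, prior: (G, N) arrays of reward-model scores and
    semantic prior-anchor scores for G groups of N samples."""
    adv = grpo_advantages(rewards)

    # Inter-group Bayesian trust allocation (assumed form): groups whose
    # rewards agree with the prior anchor receive larger update weights.
    disagreement = np.mean((rewards - prior) ** 2, axis=1)    # (G,)
    trust = np.exp(-disagreement / tau)
    trust = trust * len(trust) / trust.sum()                  # mean-one weights

    # Intra-group prior-anchored renormalization (assumed form): samples
    # consistent with the prior are treated as confident and expanded,
    # while scores that conflict with it are compressed toward zero.
    confidence = np.exp(-kappa * np.abs(rewards - prior))     # (G, N), in (0, 1]
    scale = confidence / confidence.mean(axis=1, keepdims=True)
    adv = scale * adv
    adv -= adv.mean(axis=1, keepdims=True)                    # keep groups zero-mean

    return trust[:, None] * adv

In this reading, the mean-one normalizations of both the trust weights and the confidence scale keep the overall update magnitude comparable to standard GRPO, so the modulation redistributes trust rather than changing the effective learning rate.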
