Automated Filtering of Human Feedback Data for Aligning Text-to-Image Diffusion Models

Yongjin Yang, Sihyeon Kim, Hojung Jung, Sangmin Bae, SangMook Kim, Se-Young Yun, Kimin Lee

arXiv.org Artificial Intelligence 

Fine-tuning text-to-image diffusion models with human feedback is an effective method for aligning model behavior with human intentions. However, this alignment process often suffers from slow convergence due to the large size of, and noise present in, human feedback datasets. In this work, we propose FiFA, a novel automated data filtering algorithm designed to enhance the fine-tuning of diffusion models on human feedback datasets with direct preference optimization (DPO). Specifically, our approach selects data by solving an optimization problem that maximizes three components: preference margin, text quality, and text diversity. The preference margin, computed with a proxy reward model, identifies samples with high informational value and addresses the noisy nature of feedback datasets. Additionally, we incorporate text quality, assessed by large language models to prevent harmful content, and text diversity, measured with a k-nearest neighbor entropy estimator to improve generalization. Finally, we integrate all of these components into a single optimization process, approximating the solution by assigning an importance score to each data pair and selecting the most important ones. As a result, our method filters data automatically and efficiently, without the need for manual intervention, and can be applied to any large-scale dataset. Experimental results show that FiFA significantly enhances training stability and achieves better performance, being preferred by humans 17% more often, while using less than 0.5% of the full data and thus about 1% of the GPU hours required when training on the full human feedback dataset.

Warning: This paper contains offensive content that may be upsetting.

Large-scale models trained on extensive web-scale datasets using diffusion techniques (Ho et al., 2020; Song et al., 2020), such as Stable Diffusion (Rombach et al., 2022), DALL-E (Ramesh et al., 2022), and Imagen (Saharia et al., 2022), have enabled the generation of high-fidelity images from diverse text prompts. However, several failure cases remain, such as difficulties in rendering text content within images, incorrect counting, or insufficient aesthetics for certain text prompts (Lee et al., 2023; Fan et al., 2024; Black et al., 2023). Fine-tuning text-to-image diffusion models using human feedback has recently emerged as a powerful approach to address these issues (Black et al., 2023; Fan et al., 2024; Prabhudesai et al., 2023; Clark et al., 2023). Unlike the conventional optimization strategy of likelihood maximization, this framework first trains reward models on human feedback (Kirstain et al., 2024; Wu et al., 2023; Xu et al., 2024) and then fine-tunes the diffusion models to maximize reward scores through policy gradient (Fan et al., 2024; Black et al., 2023) or reward-gradient-based techniques (Prabhudesai et al., 2023; Clark et al., 2023).
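To make the filtering objective described above more concrete, the following is a minimal sketch of one way the selection could be approximated: each preference pair receives an importance score combining its reward-model margin, an LLM-judged text-quality score, and a k-nearest-neighbor entropy bonus over prompt embeddings, and the highest-scoring pairs are greedily chosen up to a budget. The function names, the Kozachenko-Leonenko-style entropy term, the greedy loop, and the trade-off weights alpha and beta are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def knn_entropy_bonus(embeddings, selected_idx, candidate_idx, k=5):
    """Marginal k-NN diversity bonus of adding one candidate prompt embedding
    to the currently selected set (illustrative, Kozachenko-Leonenko style)."""
    if len(selected_idx) < k:
        return 0.0
    cand = embeddings[candidate_idx]
    dists = np.linalg.norm(embeddings[selected_idx] - cand, axis=1)
    kth = np.partition(dists, k - 1)[k - 1]
    # A larger distance to the k-th nearest selected prompt means the candidate
    # lies in a sparser region, so adding it increases prompt diversity.
    return np.log(kth + 1e-8)

def fifa_select(margins, quality, embeddings, budget, alpha=1.0, beta=1.0):
    """Greedy approximation of a selection objective that trades off
    preference margin, text quality, and text diversity under a budget.

    margins    : r(x, y_w) - r(x, y_l) from a proxy reward model, per pair
    quality    : LLM-judged text-quality scores, per prompt
    embeddings : prompt embeddings used for the k-NN diversity term
    alpha/beta : illustrative trade-off weights (assumed, not from the paper)
    """
    n = len(margins)
    selected, remaining = [], set(range(n))
    while len(selected) < budget and remaining:
        best, best_score = None, -np.inf
        for i in remaining:
            score = (margins[i]
                     + alpha * quality[i]
                     + beta * knn_entropy_bonus(embeddings, selected, i))
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

In practice the selected indices would simply define the small subset of preference pairs passed to DPO fine-tuning; the margin term favors informative comparisons, the quality term filters harmful or low-quality prompts, and the entropy term discourages selecting many near-duplicate prompts.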