Cycle Consistency as Reward: Learning Image-Text Alignment without Human Preferences
Hyojin Bahng, Caroline Chan, Frédo Durand, Phillip Isola
arXiv.org Artificial Intelligence
Measuring alignment between language and vision is a fundamental challenge, especially as multimodal data becomes increasingly detailed and complex. Existing methods often rely on collecting human or AI preferences, which can be costly and time-intensive. We propose an alternative approach that leverages cycle consistency as a supervisory signal. Given an image and generated text, we map the text back to image space using a text-to-image model and compute the similarity between the original image and its reconstruction. Analogously, for text-to-image generation, we measure the textual similarity between an input caption and its reconstruction through the cycle. We use the cycle consistency score to rank candidates and construct a preference dataset of 866K comparison pairs. The reward model trained on our dataset, CycleReward, outperforms state-of-the-art alignment metrics on detailed captioning, with superior inference-time scalability when used as a verifier for Best-of-N sampling, while maintaining speed and differentiability. Furthermore, performing DPO and Diffusion DPO using our dataset enhances performance across a wide range of vision-language tasks and text-to-image generation. Our dataset, model, and code are publicly released at https://cyclereward.github.io.
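The scoring procedure described in the abstract is simple enough to sketch. Below is a minimal Python illustration of the image→text direction: a candidate caption is mapped back to image space with a text-to-image model, and the reconstruction is compared against the original image. The specific models (Stable Diffusion for reconstruction, CLIP embeddings for image similarity) and all function names are illustrative assumptions, not the authors' actual choices; the analogous text→image direction would instead compare the input caption against a re-generated caption using a textual similarity measure.

```python
# Minimal sketch of cycle-consistency scoring for the image -> text
# direction. Stable Diffusion and CLIP are illustrative stand-ins; the
# paper does not necessarily use these models or this similarity metric.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

t2i = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def image_embedding(img: Image.Image) -> torch.Tensor:
    """L2-normalized CLIP image embedding."""
    inputs = clip_proc(images=img, return_tensors="pt").to(device)
    with torch.no_grad():
        emb = clip.get_image_features(**inputs)
    return emb / emb.norm(dim=-1, keepdim=True)


def cycle_consistency_score(image: Image.Image, caption: str) -> float:
    """Map the caption back to image space and measure how well the
    reconstruction matches the original image."""
    recon = t2i(caption).images[0]  # text -> image reconstruction
    sim = (image_embedding(image) @ image_embedding(recon).T).item()
    return sim  # higher = caption better preserves the image's content


def rank_captions(image: Image.Image, candidates: list[str]) -> list[str]:
    """Rank candidate captions by cycle consistency. Pairs drawn from
    this ranking can serve as comparisons in a preference dataset."""
    return sorted(candidates,
                  key=lambda c: cycle_consistency_score(image, c),
                  reverse=True)
```

Ranking candidates this way requires no human labels: the ordering itself supplies the chosen/rejected pairs used to train the reward model.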
Nov-4-2025