LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation
Lu, Yujie, Yang, Xianjun, Li, Xiujun, Wang, Xin Eric, Wang, William Yang
arXiv.org Artificial Intelligence
Existing automatic evaluation methods for text-to-image synthesis provide only a single image-text matching score and ignore object-level compositionality, which results in poor correlation with human judgments. In this work, we propose LLMScore, a new framework that offers evaluation scores with multi-granularity compositionality. LLMScore leverages large language models (LLMs) to evaluate text-to-image models. It first transforms the image into image-level and object-level visual descriptions. An evaluation instruction is then fed to the LLM to measure the alignment between the synthesized image and the text, ultimately producing a score accompanied by a rationale. Our extensive analysis reveals that LLMScore has the highest correlation with human judgments across a wide range of datasets (Attribute Binding Contrast, Concept Conjunction, MSCOCO, DrawBench, PaintSkills). Notably, LLMScore achieves a Kendall's tau correlation with human evaluations that is 58.8% and 31.2% higher than the commonly used text-image matching metrics CLIP and BLIP, respectively.
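The pipeline in the abstract (visual descriptions in, instruction to an LLM, score plus rationale out) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the prompt template, the 0-100 scale, the `Score: ... Rationale: ...` answer format, and both function names are assumptions for the example; the actual LLM call is left as a stub.

```python
import re


def build_evaluation_prompt(text_prompt, image_description, object_descriptions):
    """Assemble an evaluation instruction for the LLM (hypothetical template)."""
    objects = "\n".join(f"- {d}" for d in object_descriptions)
    return (
        f"Text prompt: {text_prompt}\n"
        f"Image-level description: {image_description}\n"
        f"Object-level descriptions:\n{objects}\n"
        "Rate the alignment between the synthesized image and the text "
        "on a 0-100 scale and explain your reasoning. "
        "Answer in the form 'Score: <n>. Rationale: <why>'."
    )


def parse_llm_response(response):
    """Extract the numeric score and the rationale from the LLM's answer."""
    match = re.search(r"Score:\s*(\d+)\.?\s*Rationale:\s*(.*)", response, re.S)
    if not match:
        raise ValueError("unparseable LLM response")
    return int(match.group(1)), match.group(2).strip()
```

In use, `build_evaluation_prompt` would be sent to an LLM of choice, and `parse_llm_response` applied to its reply, e.g. `parse_llm_response("Score: 85. Rationale: The dog matches but its color differs.")` yields the pair `(85, "The dog matches but its color differs.")`.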
May-18-2023