LOTUS: A Leaderboard for Detailed Image Captioning from Quality to Societal Bias and User Preferences
Yusuke Hirota, Boyi Li, Ryo Hachiuma, Yueh-Hua Wu, Boris Ivanovic, Yuta Nakashima, Marco Pavone, Yejin Choi, Yu-Chiang Frank Wang, Chao-Han Huck Yang
arXiv.org Artificial Intelligence
Large Vision-Language Models (LVLMs) have transformed image captioning, shifting from concise captions to detailed descriptions. We introduce LOTUS, a leaderboard for evaluating detailed captions that addresses three main gaps in existing evaluations: the lack of standardized criteria, bias-aware assessments, and user preference considerations. LOTUS comprehensively evaluates multiple aspects, including caption quality (e.g., alignment, descriptiveness), risks (e.g., hallucination), and societal biases (e.g., gender bias), while enabling preference-oriented evaluation by tailoring criteria to diverse user preferences. Our analysis of recent LVLMs reveals that no single model excels across all criteria, and that correlations emerge between caption detail and bias risks. Preference-oriented evaluations demonstrate that the optimal model choice depends on user priorities.
Dec-2-2025
- Genre:
- Research Report > New Finding (0.67)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Natural Language (1.00)
- Representation & Reasoning > Personal Assistant Systems (0.82)
- Vision (1.00)