Vision language models are blind
Pooyan Rahmanzadehgervi, Logan Bolton, Mohammad Reza Taesiri, Anh Totti Nguyen
arXiv.org Artificial Intelligence
Large language models with vision capabilities (VLMs), e.g., GPT-4o and Gemini 1.5 Pro, are powering countless image-text applications and scoring high on many vision-understanding benchmarks. We propose BlindTest, a suite of 7 visual tasks absurdly easy for humans, such as identifying (a) whether two circles overlap; (b) whether two lines intersect; (c) which letter is being circled in a word; and (d) counting the number of circles in an Olympic-like logo. Surprisingly, four state-of-the-art VLMs are, on average, only 56.20% accurate on our benchmark, with Sonnet-3.5 being the best (73.77% accuracy). On BlindTest, VLMs struggle with tasks that require precise spatial information and counting (from 0 to 10), sometimes giving the impression of a person with myopia who sees fine details as blurry and makes educated guesses. Code is available at: https://vlmsareblind.github.io/
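To make the flavor of these tasks concrete, the circle-overlap task (a) reduces to a one-line geometric check: two circles overlap exactly when the distance between their centers is less than the sum of their radii. The sketch below is an illustration only, not the authors' released BlindTest code; the function names and the canvas/radius ranges are hypothetical.

```python
import math
import random


def circles_overlap(c1, r1, c2, r2):
    """Ground truth for task (a): two circles overlap when the
    distance between their centers is below the sum of their radii."""
    return math.dist(c1, c2) < r1 + r2


def sample_overlap_instance(rng, canvas=100.0):
    """Sample one hypothetical BlindTest-style instance: two random
    circles on a canvas x canvas image, plus the yes/no label a VLM
    would be asked to predict. Ranges here are illustrative guesses."""
    c1 = (rng.uniform(0, canvas), rng.uniform(0, canvas))
    c2 = (rng.uniform(0, canvas), rng.uniform(0, canvas))
    r1 = rng.uniform(5, 20)
    r2 = rng.uniform(5, 20)
    return {"circle1": (c1, r1), "circle2": (c2, r2),
            "label": circles_overlap(c1, r1, c2, r2)}


# Example: disjoint vs. overlapping pairs.
print(circles_overlap((0, 0), 1, (3, 0), 1))  # False: 3 > 1 + 1
print(circles_overlap((0, 0), 2, (3, 0), 2))  # True: 3 < 2 + 2
```

Rendering such instances as images and querying a VLM for the label is what makes the benchmark trivial for humans yet, per the reported accuracies, surprisingly hard for current models.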
Jul-12-2024
- Country:
  - Europe
    - Ireland > Leinster > County Dublin > Dublin (0.04)
    - Netherlands > North Holland > Amsterdam (0.04)
  - North America
    - Canada > Alberta (0.14)
    - United States (0.04)
  - South America > Colombia > Meta Department > Villavicencio (0.04)
- Genre:
- Research Report > New Finding (0.46)