Argus Inspection: Do Multimodal Large Language Models Possess the Eye of Panoptes?
Yang Yao, Lingyu Li, Jiaxin Song, Chiyu Chen, Zhenqi He, Yixu Wang, Xin Wang, Tianle Gu, Jie Li, Yan Teng, Yingchun Wang
–arXiv.org Artificial Intelligence
As Multimodal Large Language Models (MLLMs) continue to evolve, their cognitive and reasoning capabilities have progressed remarkably. However, challenges in fine-grained visual perception and commonsense causal inference persist. This paper introduces Argus Inspection, a multimodal benchmark with two levels of difficulty that emphasizes detailed visual recognition while incorporating real-world commonsense understanding to evaluate causal reasoning abilities. Building on it, we present the Eye of Panoptes framework, which integrates a binary parametric Sigmoid metric with an indicator function, enabling a more holistic evaluation of MLLMs' responses in opinion-based reasoning tasks. Experiments conducted on 26 mainstream MLLMs reveal that the highest performance in visual fine-grained reasoning reaches only 0.46, highlighting considerable room for improvement. Our research offers valuable perspectives for the continued refinement of MLLMs.
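The abstract describes the Eye of Panoptes metric only at a high level: a binary indicator function gating a parametric Sigmoid. A minimal sketch of one plausible reading follows; the function name, the `k` (steepness) and `tau` (midpoint) parameters, and the interpretation of the inputs are all assumptions for illustration, not the paper's actual formulation.

```python
import math

def panoptes_score(opinion_correct: bool, reasoning_quality: float,
                   k: float = 10.0, tau: float = 0.5) -> float:
    """Hypothetical indicator-gated parametric sigmoid score.

    opinion_correct   -- gates the score via an indicator function
                         (wrong opinion => score 0)
    reasoning_quality -- assumed quality measure in [0, 1]
    k, tau            -- illustrative sigmoid steepness and midpoint
    """
    indicator = 1.0 if opinion_correct else 0.0
    sigmoid = 1.0 / (1.0 + math.exp(-k * (reasoning_quality - tau)))
    return indicator * sigmoid
```

Under this reading, a response with the wrong opinion scores zero regardless of its reasoning, while a correct opinion is scored smoothly by reasoning quality, which would penalize lucky guesses with weak justification.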
Aug-13-2025