Can Argus Judge Them All? Comparing VLMs Across Domains

Harsh Joshi, Gautam Siddharth Kashyap, Rafiq Ali, Ebad Shabbir, Niharika Jain, Sarthak Jain, Jiechao Gao, Usman Naseem

arXiv.org Artificial Intelligence 

Vision-Language Models (VLMs) are advancing multimodal AI, yet their performance consistency across tasks is underexamined. We benchmark CLIP, BLIP, and LXMERT across diverse datasets spanning retrieval, captioning, and reasoning. Our evaluation includes task accuracy, generation quality, efficiency, and a novel Cross-Dataset Consistency (CDC) metric. CLIP shows strongest generalization (CDC: 0.92), BLIP excels on curated data, and LXMERT leads in structured reasoning. These results expose trade-offs between generalization and specialization, informing industrial deployment of VLMs and guiding development toward robust, task-flexible architectures.
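The abstract introduces a Cross-Dataset Consistency (CDC) metric but does not state its formula. As a minimal sketch only, the snippet below assumes CDC is derived from the spread of a model's per-dataset scores (here, one minus the coefficient of variation, clipped to [0, 1]); the function name, datasets, and numbers are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a Cross-Dataset Consistency (CDC) score.
# Assumption: CDC = 1 - coefficient of variation of per-dataset scores,
# clipped to [0, 1], so identical scores across datasets give CDC = 1.0.

from statistics import mean, pstdev


def cross_dataset_consistency(scores: dict[str, float]) -> float:
    """Return a consistency score in [0, 1] from per-dataset metrics."""
    values = list(scores.values())
    if len(values) < 2:
        return 1.0  # a single dataset is trivially consistent
    mu = mean(values)
    if mu == 0:
        return 0.0
    cv = pstdev(values) / mu  # relative spread across datasets
    return max(0.0, 1.0 - cv)


# Example with made-up accuracies on three hypothetical task groups.
clip_scores = {"retrieval": 0.81, "captioning": 0.78, "reasoning": 0.74}
print(f"CDC = {cross_dataset_consistency(clip_scores):.2f}")
```

A higher CDC under this reading means a model's performance varies little across task groups, which matches how the abstract uses it to characterize CLIP's generalization.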
