
WildVision: Evaluating Vision-Language Models in the Wild with Human Preferences

Lu, Yujie, Jiang, Dongfu, Chen, Wenhu, Wang, William Yang, Choi, Yejin, Lin, Bill Yuchen

arXiv.org Artificial Intelligence

Recent breakthroughs in vision-language models (VLMs) emphasize the necessity of benchmarking human preferences in real-world multimodal interactions. To address this gap, we launched WildVision-Arena (WV-Arena), an online platform that collects human preferences to evaluate VLMs. We curated WV-Bench by selecting 500 high-quality samples from 8,000 user submissions in WV-Arena. WV-Bench uses GPT-4 as the judge to compare each VLM with Claude-3-Sonnet, achieving a Spearman correlation of 0.94 with the WV-Arena Elo. This significantly outperforms other benchmarks like MMVet, MMMU, and MMStar. Our comprehensive analysis of 20K real-world interactions reveals important insights into the failure cases of top-performing VLMs. For example, we find that although GPT-4V surpasses many other models like Reka-Flash, Opus, and Yi-VL-Plus in simple visual recognition and reasoning tasks, it still faces challenges with subtle contextual cues, spatial reasoning, visual imagination, and expert domain knowledge. Additionally, current VLMs exhibit issues with hallucinations and safety when intentionally provoked. We are releasing our chat and feedback data to further advance research in the field of VLMs.
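The abstract validates WV-Bench by reporting a Spearman correlation of 0.94 between its GPT-4-judged scores and the WV-Arena Elo ratings. As a rough illustration of that validation step, the sketch below computes a Spearman rank correlation from scratch; the model names and scores are made up for illustration and are not from the paper.

```python
def ranks(values):
    """1-based ranks with ties assigned the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Group indices whose values tie with values[order[i]].
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            out[order[k]] = avg_rank
        i = j + 1
    return out

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical benchmark scores and arena Elo ratings for four models.
bench_scores = [71.2, 65.8, 60.1, 55.4]
elo_ratings = [1220, 1180, 1150, 1100]
rho = spearman(bench_scores, elo_ratings)
```

A high rho indicates the offline benchmark preserves the ordering that live human preferences produce, which is the property WV-Bench is claiming with its 0.94 figure.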


DiFoRem: artificial intelligence assists automated driving

#artificialintelligence

Frankfurt (IAA), 12th September 2019 EDAG BFFT Electronics has developed software that uses artificial intelligence to support assisted and automated driving, even when visibility is poor. The DiFoRem (Dirt & Fog Removal) system compensates in real time for image errors caused by dirt, fogging, or camera lens defects with the help of neural networks. The reconstructed image can then be used by other assistance systems or for automated driving, providing a significant increase in image and information quality. To digitally compensate for image errors, the impaired areas of every incoming image are first identified algorithmically and marked accordingly. To this end, neural networks have been trained to learn the relation between image sections with and without errors.
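The pipeline described here is detect-then-reconstruct: impaired pixels are marked with a mask, and the masked regions are filled from their surroundings. DiFoRem's trained networks are not public, so the sketch below substitutes a deliberately simple stand-in for the learned reconstruction: iterative neighbour averaging over the masked pixels of a grayscale image represented as nested lists. All function names and data are illustrative assumptions, not the actual EDAG system.

```python
def inpaint(image, mask, iterations=50):
    """Fill pixels flagged in `mask` (the algorithmically detected
    dirt/fog regions) by repeatedly averaging their in-bounds
    4-neighbours. A neural network would do this reconstruction
    in the real system; this diffusion-style fill is a stand-in."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for _ in range(iterations):
        nxt = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                if not mask[y][x]:
                    continue  # pixel is clean; leave it untouched
                neighbours = []
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        neighbours.append(out[ny][nx])
                nxt[y][x] = sum(neighbours) / len(neighbours)
        out = nxt
    return out

# A 3x3 patch of brightness 100 with one "dirty" pixel in the centre.
image = [[100.0] * 3 for _ in range(3)]
image[1][1] = 0.0
mask = [[False] * 3 for _ in range(3)]
mask[1][1] = True
restored = inpaint(image, mask)
```

The key design point the article highlights survives even in this toy version: detection (the mask) and reconstruction (the fill) are separate stages, so the reconstructed frame can be handed to downstream assistance systems unchanged.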