MMAD: The First-Ever Comprehensive Benchmark for Multimodal Large Language Models in Industrial Anomaly Detection
Jiang, Xi, Li, Jian, Deng, Hanqiu, Liu, Yong, Gao, Bin-Bin, Zhou, Yifeng, Li, Jialin, Wang, Chengjie, Zheng, Feng
arXiv.org Artificial Intelligence
In the field of industrial inspection, Multimodal Large Language Models (MLLMs) have a high potential to renew the paradigms in practical applications due to their robust language capabilities and generalization abilities. However, despite their impressive problem-solving skills in many domains, MLLMs' ability in industrial anomaly detection has not been systematically studied. To bridge this gap, we present MMAD, the first-ever full-spectrum MLLM benchmark in industrial Anomaly Detection. We defined seven key subtasks of MLLMs in industrial inspection and designed a novel pipeline to generate the MMAD dataset, comprising 39,672 questions for 8,366 industrial images. With MMAD, we have conducted a comprehensive, quantitative evaluation of various state-of-the-art MLLMs. The commercial models performed the best, with the average accuracy of GPT-4o models reaching 74.9%. However, this result falls far short of industrial requirements. Our analysis reveals that current MLLMs still have significant room for improvement in answering questions related to industrial anomalies and defects. We further explore two training-free performance enhancement strategies to help models improve in industrial scenarios, highlighting their promising potential for future research. The code and data are available at https://github.com/jam-cc/MMAD.

Automatic vision inspection is a crucial challenge in realizing an unmanned factory (Benbarrad et al., 2021). Traditional AI research for automatic vision inspection, such as industrial anomaly detection (IAD) (Jiang et al., 2022b; Ren et al., 2022), typically relies on discriminative models within the conventional deep learning paradigm. These models can only perform the detection tasks they were trained for and cannot provide detailed reports the way quality inspection workers do. The development of MLLMs (Jin et al., 2024) has the potential to alter this situation.
These generative models can flexibly produce the required textual output based on input language and visual prompts, allowing us to guide the model with language much as we would instruct a human. Nowadays, multimodal large language models, represented by GPT-4 (Achiam et al., 2023), can already perform many human jobs, especially high-paying intellectual work such as that of programmers, writers, and data analysts (Eloundou et al., 2023). In comparison, the work of quality inspectors is simple, typically not requiring a high level of education but relying heavily on work experience.
Jan-7-2025