Do Large Multimodal Models Solve Caption Generation for Scientific Figures? Lessons Learned from SciCap Challenge 2023
Hsu, Ting-Yao E., Hsu, Yi-Li, Rohatgi, Shaurya, Huang, Chieh-Yang, Ng, Ho Yin Sam, Rossi, Ryan, Kim, Sungchul, Yu, Tong, Ku, Lun-Wei, Giles, C. Lee, Huang, Ting-Hao K.
arXiv.org Artificial Intelligence
Since the SciCap dataset's launch in 2021, the research community has made significant progress in generating captions for scientific figures in scholarly articles. In 2023, the first SciCap Challenge took place, inviting global teams to use an expanded SciCap dataset to develop models for captioning diverse figure types across various academic fields. At the same time, text generation models advanced quickly, with many powerful pre-trained large multimodal models (LMMs) emerging that showed impressive capabilities in various vision-and-language tasks. This paper presents an overview of the first SciCap Challenge and details the performance of various models on its data, capturing a snapshot of the field's state. We found that professional editors overwhelmingly preferred figure captions generated by GPT-4V over those from all other models and even the original captions written by authors. Following this key finding, we conducted detailed analyses to answer this question: Have advanced LMMs solved the task of generating captions for scientific figures?
Feb-18-2025