Constructing Multimodal Datasets from Scratch for Rapid Development of a Japanese Visual Language Model
Keito Sasagawa, Koki Maeda, Issa Sugiura, Shuhei Kurita, Naoaki Okazaki, Daisuke Kawahara
To develop high-performing Visual Language Models (VLMs), it is essential to prepare multimodal resources, such as image-text pairs, interleaved data, and instruction data. While multimodal resources for English are abundant, there is a significant lack of corresponding resources for non-English languages, such as Japanese. To address this gap, taking Japanese as a case study, we propose a method for rapidly creating Japanese multimodal datasets from scratch. We collect Japanese image-text pairs and interleaved data from web archives and generate Japanese instruction data directly from images using an existing VLM. Our experimental results show that a VLM trained on these native datasets outperforms those relying on machine-translated content.
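To make the pair-collection step concrete, below is a minimal sketch of how Japanese image-text pairs might be extracted from web-archive HTML. It is an illustrative assumption, not the authors' exact pipeline: the directory name `warc_html`, the use of `alt` text as the caption, and the kana-based language check are all placeholder heuristics.

```python
# Sketch: extract candidate (image URL, caption) pairs from archived HTML
# pages and keep only captions that look Japanese. Heuristics here are
# assumptions for illustration, not the paper's actual filtering rules.
import re
from pathlib import Path
from bs4 import BeautifulSoup

# Hiragana and katakana ranges; kanji alone is ambiguous with Chinese,
# so we require at least a few kana characters.
JAPANESE_KANA = re.compile(r"[\u3040-\u30FF]")

def is_japanese(text: str, min_kana: int = 2) -> bool:
    """Heuristic: treat text with at least `min_kana` kana characters as Japanese."""
    return len(JAPANESE_KANA.findall(text)) >= min_kana

def extract_pairs(html: str) -> list[tuple[str, str]]:
    """Return (image URL, caption) pairs whose captions pass the Japanese check."""
    soup = BeautifulSoup(html, "html.parser")
    pairs = []
    for img in soup.find_all("img"):
        src = img.get("src")
        caption = (img.get("alt") or "").strip()
        if src and caption and is_japanese(caption):
            pairs.append((src, caption))
    return pairs

if __name__ == "__main__":
    # `warc_html/` is a hypothetical directory of HTML pages pulled
    # from a web archive; adjust to your own extraction layout.
    for page in Path("warc_html").glob("*.html"):
        for url, caption in extract_pairs(page.read_text(encoding="utf-8")):
            print(url, caption, sep="\t")
```

The resulting pairs would still need downstream deduplication and quality filtering before pretraining; the instruction-data step, by contrast, prompts an existing VLM directly on the collected images rather than translating English instructions.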
arXiv.org Artificial Intelligence
Oct-30-2024