A Culturally-diverse Multilingual Multimodal Video Benchmark & Model
Bhuiyan Sanjid Shafique; Ashmal Vayani; Muhammad Maaz; Hanoona Abdul Rasheed; Dinura Dissanayake; Mohammed Irfan Kurpath; Yahya Hmaiti; Go Inoue; Jean Lahoud; Md. Safirur Rashid; Shadid Intisar Quasem; Maheen Fatima; Franco Vidal; Mykola Maslych; Ketan Pravin More; Sanoojan Baliah; Hasindri Watawana; Yuhao Li; Fabian Farestam; Leon Schaller; Roman Tymtsiv; Simon Weber; Hisham Cholakkal; Ivan Laptev; Shin'ichi Satoh; Michael Felsberg; Mubarak Shah; Salman Khan; Fahad Shahbaz Khan
–arXiv.org Artificial Intelligence
Large multimodal models (LMMs) have recently gained attention for their effectiveness in understanding and generating descriptions of visual content, yet most existing LMMs are limited to English. While a few recent works explore multilingual image LMMs, moving beyond English for cultural and linguistic inclusivity has, to the best of our knowledge, not yet been investigated in the context of video LMMs. In pursuit of more inclusive video LMMs, we introduce a multilingual video LMM benchmark, named ViMUL-Bench, to evaluate video LMMs across 14 languages, including both low- and high-resource languages: English, Chinese, Spanish, French, German, Hindi, Arabic, Russian, Bengali, Urdu, Sinhala, Tamil, Swedish, and Japanese. ViMUL-Bench is designed to rigorously test video LMMs across 15 categories, including eight culturally diverse ones ranging from lifestyles and festivals to foods and rituals, and from local landmarks to prominent cultural personalities. It comprises both open-ended (short- and long-form) and multiple-choice questions spanning various video durations (short, medium, and long), with 8k samples manually verified by native language speakers. In addition, we introduce a machine-translated multilingual video training set comprising 1.2 million samples and develop a simple multilingual video LMM, named ViMUL, which is shown to provide a better tradeoff between high- and low-resource languages for video understanding. We hope that ViMUL-Bench, our multilingual video LMM, and the large-scale multilingual video training set will facilitate future research on culturally and linguistically inclusive multilingual video LMMs. Our proposed benchmark, video LMM, and training data will be publicly released at https://mbzuai-oryx.github.io/ViMUL/.
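The abstract describes a benchmark organized along several axes: language (14), category (15), question type (open-ended short/long-form, multiple-choice), and video duration (short, medium, long). A minimal sketch of how per-axis scores might be aggregated over such samples follows; the field names, sample values, and `accuracy_by` helper are illustrative assumptions, not the released ViMUL-Bench schema or API.

```python
from collections import defaultdict

# Hypothetical ViMUL-Bench samples: all field names and values here are
# assumptions made for illustration only.
samples = [
    {"lang": "ur", "category": "festivals", "qtype": "mcq",
     "duration": "short", "answer": "B", "prediction": "B"},
    {"lang": "en", "category": "landmarks", "qtype": "mcq",
     "duration": "long", "answer": "C", "prediction": "A"},
]

def accuracy_by(samples, axis):
    """Exact-match accuracy grouped along one benchmark axis (e.g. 'lang')."""
    correct, total = defaultdict(int), defaultdict(int)
    for s in samples:
        total[s[axis]] += 1
        correct[s[axis]] += int(s["prediction"] == s["answer"])
    return {key: correct[key] / total[key] for key in total}

print(accuracy_by(samples, "lang"))      # per-language accuracy
print(accuracy_by(samples, "duration"))  # per-duration accuracy
```

The same grouping works for any of the benchmark's axes, which is how one would surface the high- versus low-resource language tradeoff the abstract mentions. Note that open-ended answers would need a semantic match (e.g. an LLM judge) rather than the exact string match used here.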
Oct-1-2025
- Country:
- Africa > Middle East
- Asia
- Bangladesh > Dhaka Division > Dhaka District > Dhaka (0.04)
- China (0.04)
- India (0.04)
- Japan (0.04)
- Pakistan (0.04)
- Russia (0.04)
- Sri Lanka (0.04)
- Europe
- Finland (0.04)
- France (0.04)
- Germany > North Rhine-Westphalia
- Germany > Upper Bavaria > Munich (0.04)
- Russia (0.04)
- Spain (0.04)
- Sweden > Östergötland County > Linköping (0.04)
- Switzerland > Zürich > Zürich (0.04)
- Ukraine (0.04)
- North America > United States (0.14)
- South America > Chile
- Genre:
- Research Report > New Finding (0.68)
- Industry:
- Leisure & Entertainment > Sports (0.46)
- Media (0.93)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning > Neural Networks > Deep Learning (1.00)
- Natural Language
- Chatbot (0.97)
- Large Language Model (1.00)
- Machine Translation (0.67)
- Representation & Reasoning (1.00)
- Vision (1.00)