MM-GEN: Enhancing Task Performance Through Targeted Multimodal Data Curation
Siddharth Joshi, Besmira Nushi, Vidhisha Balachandran, Varun Chandrasekaran, Vibhav Vineet, Neel Joshi, Baharan Mirzasoleiman
arXiv.org Artificial Intelligence
Vision-language models (VLMs) are highly effective but often underperform on specialized tasks; for example, LLaVA-1.5 struggles with chart and diagram understanding because task-specific training data is scarce. Existing training data, sourced from general-purpose datasets, fails to capture the nuanced details these tasks require. We introduce MM-Gen, a scalable method that generates task-specific, high-quality synthetic text for candidate images by leveraging stronger models. MM-Gen employs a three-stage targeted process: partitioning data into subgroups, generating targeted text based on task descriptions, and filtering out redundant and outlier data. Fine-tuning VLMs on data generated by MM-Gen yields significant performance gains, including 29% on spatial reasoning and 15% on diagram understanding for LLaVA-1.5 (7B). Compared to human-curated caption data, MM-Gen achieves up to 1.6x larger improvements over the original models, demonstrating its effectiveness in enhancing task-specific VLM performance and bridging the gap between general-purpose datasets and specialized requirements. Code available at https://github.com/sjoshi804/MM-Gen.
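The three-stage process described in the abstract (partition, generate, filter) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grouping key, the text-generation call, and the redundancy check are all hypothetical stand-ins (the real method would partition by task-aware features and query a stronger VLM).

```python
from collections import defaultdict

def partition(images, num_groups=3):
    """Stage 1: split candidate images into subgroups.
    Stand-in: hash-based grouping; MM-Gen would use task-aware
    features (hypothetical simplification)."""
    groups = defaultdict(list)
    for img in images:
        groups[hash(img) % num_groups].append(img)
    return list(groups.values())

def generate(group, task_description):
    """Stage 2: produce task-specific text for each image.
    Stand-in: a deterministic template instead of a call to a
    stronger model."""
    return [(img, f"{task_description}: caption for {img}") for img in group]

def filter_samples(samples):
    """Stage 3: drop redundant samples (duplicate text here);
    an outlier check would also belong in this stage."""
    seen, kept = set(), []
    for img, text in samples:
        if text not in seen:
            seen.add(text)
            kept.append((img, text))
    return kept

def mm_gen_sketch(images, task_description):
    """Run the three stages end to end over all subgroups."""
    curated = []
    for group in partition(images):
        curated.extend(filter_samples(generate(group, task_description)))
    return curated
```

Duplicate images hash to the same subgroup, so the per-group redundancy filter removes their repeated captions; for instance, `mm_gen_sketch(["a", "b", "a"], "chart QA")` keeps one sample per distinct image.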
Jan-7-2025