CultureCLIP: Empowering CLIP with Cultural Awareness through Synthetic Images and Contextualized Captions
Yuchen Huang, Zhiyuan Fan, Zhitao He, Sandeep Polisetty, Wenyan Li, Yi R. Fung
arXiv.org Artificial Intelligence
Pretrained vision-language models (VLMs) such as CLIP excel at general multimodal comprehension but often struggle to capture nuanced, context-dependent visual cues. This makes it difficult to distinguish between similar-looking concepts with potentially different cultural meanings. These deficiencies stem mainly from the scarcity of high-quality cultural data and contextual information, and from the lack of negative examples that highlight subtle differences. To mitigate this, we design a data curation pipeline that leverages open-sourced VLMs and text-to-image models to construct CulTwin, a synthetic cultural dataset. The dataset consists of paired concept-caption-image triplets in which the concepts visually resemble each other but differ culturally. We then fine-tune CLIP on CulTwin to develop CultureCLIP, which aligns cultural concepts with contextually enhanced captions and synthetic images through tailored contrastive learning. Experiments on culture-specific benchmarks show that CultureCLIP outperforms the base CLIP, achieving up to a notable 5.49% improvement in fine-grained concept recognition on certain tasks while preserving CLIP's original generalization ability, validating the effectiveness of our data synthesis and VLM backbone training paradigm in capturing subtle cultural distinctions.
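The abstract does not spell out the training objective, but CLIP-style fine-tuning typically minimizes a symmetric InfoNCE contrastive loss over matched image-text pairs, with the other items in the batch serving as negatives. Below is a minimal NumPy sketch of that standard loss; the function name, temperature value, and use of NumPy are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def log_softmax(x, axis):
    # Numerically stable log-softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def clip_contrastive_loss(image_embs, text_embs, temperature=0.07):
    """Symmetric InfoNCE loss as in CLIP (illustrative sketch).

    image_embs, text_embs: (N, D) arrays; row i of each is a matched pair.
    Off-diagonal pairs act as in-batch negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix
    n = logits.shape[0]
    diag = np.arange(n)
    # Cross-entropy in both directions: image->text and text->image,
    # with the correct match on the diagonal.
    loss_i2t = -log_softmax(logits, axis=1)[diag, diag].mean()
    loss_t2i = -log_softmax(logits, axis=0)[diag, diag].mean()
    return (loss_i2t + loss_t2i) / 2
```

A fine-tuning setup in the spirit of the paper would additionally pair each cultural concept with both its contextualized caption and its visual "twin" as a hard negative, which sharpens the gradient on culturally distinguishing features; the exact batching scheme used in CultureCLIP is not described in the abstract.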
Jul-17-2025