Cross-modal RAG: Sub-dimensional Text-to-Image Retrieval-Augmented Generation
Mengdan Zhu, Senhao Cheng, Guangji Bai, Yifei Zhang, Liang Zhao
arXiv.org Artificial Intelligence
Text-to-image generation increasingly demands access to domain-specific, fine-grained, and rapidly evolving knowledge that pretrained models cannot fully capture, necessitating the integration of retrieval methods. Existing Retrieval-Augmented Generation (RAG) methods attempt to address this by retrieving globally relevant images, but they fail when no single image contains all desired elements of a complex user query. We propose Cross-modal RAG, a novel framework that decomposes both queries and images into sub-dimensional components, enabling subquery-aware retrieval and generation. Our method introduces a hybrid retrieval strategy, combining a sub-dimensional sparse retriever with a dense retriever, to identify a Pareto-optimal set of images, each contributing complementary aspects of the query. During generation, a multimodal large language model is guided to selectively condition on visual features aligned to specific subqueries, ensuring subquery-aware image synthesis. Extensive experiments on MS-COCO, Flickr30K, WikiArt, CUB, and ImageNet-LT demonstrate that Cross-modal RAG significantly outperforms existing baselines in retrieval and further improves generation quality, while maintaining high efficiency.
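The abstract's notion of a Pareto-optimal image set can be illustrated with a minimal sketch. This is not the authors' implementation; the scoring scheme and image names are hypothetical assumptions. Each candidate image is given a score vector over the query's subqueries, and an image survives if no other image dominates it (at least as good on every subquery, strictly better on at least one), so the retained set covers complementary aspects of the query:

```python
def pareto_optimal(images):
    """Return image ids not dominated by any other image.

    images: dict mapping image id -> list of per-subquery relevance
    scores (one entry per subquery of the decomposed user query).
    """
    def dominates(a, b):
        # a dominates b: >= on every subquery and > on at least one
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    return [name for name, scores in images.items()
            if not any(dominates(other, scores)
                       for o, other in images.items() if o != name)]

# Toy example with three hypothetical subqueries
# (e.g. "red bird", "snowy branch", "sunset sky"):
candidates = {
    "img_a": [0.9, 0.1, 0.2],  # strong on subquery 1 only
    "img_b": [0.2, 0.8, 0.7],  # covers subqueries 2 and 3
    "img_c": [0.1, 0.1, 0.1],  # dominated by both others
}
print(pareto_optimal(candidates))  # -> ['img_a', 'img_b']
```

Here `img_a` and `img_b` are both kept because each is best on a different subquery, matching the paper's idea that no single image need contain all desired elements.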
Sep-30-2025