UniIR: Training and Benchmarking Universal Multimodal Information Retrievers
Cong Wei, Yang Chen, Haonan Chen, Hexiang Hu, Ge Zhang, Jie Fu, Alan Ritter, Wenhu Chen
arXiv.org Artificial Intelligence
Existing information retrieval (IR) models often assume a homogeneous query and corpus format, limiting their applicability to diverse user needs, such as searching for images with text descriptions, searching for a news article with a headline image, or finding a similar photo with a query image. To accommodate such varied information-seeking demands, we introduce UniIR, a unified instruction-guided multimodal retriever capable of handling eight distinct retrieval tasks across modalities. UniIR, a single retrieval system jointly trained on ten diverse multimodal-IR datasets, interprets user instructions to execute various retrieval tasks, demonstrating robust performance across existing datasets and zero-shot generalization to new tasks. Our experiments highlight that multi-task training and instruction tuning are key to UniIR's generalization ability. Additionally, we construct M-BEIR, a multimodal retrieval benchmark with comprehensive results, to standardize the evaluation of universal multimodal information retrieval.
Nov-28-2023