Synthetic Multimodal Question Generation

Ian Wu, Sravan Jayanthi, Vijay Viswanathan, Simon Rosenberg, Sina Pakazad, Tongshuang Wu, Graham Neubig

arXiv.org Artificial Intelligence 

Multimodal Retrieval Augmented Generation (MMRAG) is a powerful approach to question answering over multimodal documents. A key challenge with evaluating MMRAG is the paucity of high-quality datasets matching the question styles and modalities of interest. In light of this, we propose SMMQG, a synthetic data generation framework. SMMQG leverages interplay between a retriever, a large language model (LLM) and a large multimodal model (LMM) to generate question and answer pairs directly from multimodal documents, with the questions conforming to specified styles and modalities. We use SMMQG to generate an MMRAG dataset of 1024 questions over Wikipedia documents and evaluate state-of-the-art models using it, revealing insights into model performance that are attainable only through style- and modality-specific evaluation data. Next, we measure the quality of the data produced by SMMQG.

Figure 1: An overview of SMMQG. Given user-provided question style and modality requirements, SMMQG selects question sources and produces questions and answers. The questions are grounded in the selected question sources, and adhere to the question style and modality requirements.
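The abstract describes SMMQG as an interplay between a retriever, an LLM, and an LMM that turns user-specified style and modality requirements into grounded question-answer pairs. The sketch below shows one plausible way such a pipeline could be wired together; the `Source`, `QAPair`, `retrieve_sources`, `llm_generate`, and `lmm_generate` interfaces are assumptions made for illustration, not the paper's actual implementation.

```python
# A minimal sketch of an SMMQG-style generation loop (assumed interfaces,
# not the authors' code): retrieve question sources matching the requested
# modalities, then route to an LLM or LMM to produce a grounded QA pair.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Source:
    modality: str   # e.g. "text", "table", or "image"
    content: str    # raw text, serialized table, or an image reference

@dataclass
class QAPair:
    question: str
    answer: str
    sources: List[Source]

def generate_qa(
    corpus: List[Source],
    style: str,                       # e.g. "single-hop", "comparison"
    modalities: List[str],            # e.g. ["text", "image"]
    retrieve_sources: Callable[[List[Source], List[str]], List[Source]],
    llm_generate: Callable[[str, List[Source]], QAPair],
    lmm_generate: Callable[[str, List[Source]], QAPair],
) -> QAPair:
    """Select question sources that match the modality requirements, then
    generate a question-answer pair in the requested style, grounded in
    those sources. Image sources are routed to the LMM, text-only to the LLM."""
    sources = retrieve_sources(corpus, modalities)
    needs_vision = any(s.modality == "image" for s in sources)
    generator = lmm_generate if needs_vision else llm_generate
    return generator(style, sources)

# Toy usage with dummy components, just to show the control flow.
if __name__ == "__main__":
    corpus = [Source("text", "Paris is the capital of France."),
              Source("image", "photo_of_eiffel_tower.jpg")]
    qa = generate_qa(
        corpus,
        style="single-hop",
        modalities=["text"],
        retrieve_sources=lambda docs, mods: [d for d in docs if d.modality in mods][:1],
        llm_generate=lambda style, srcs: QAPair(
            f"[{style}] What is the capital of France?", "Paris", srcs),
        lmm_generate=lambda style, srcs: QAPair("", "", srcs),
    )
    print(qa.question, "->", qa.answer)
```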
