Beyond Retrieval: Joint Supervision and Multimodal Document Ranking for Textbook Question Answering

Alawwad, Hessa, Naseem, Usman, Alhothali, Areej, Alkhathlan, Ali, Jamal, Amani

arXiv.org Artificial Intelligence 

Textbook question answering (TQA) is a challenging task that requires interpreting complex multimodal context. Although recent advances have improved overall performance, they often encounter difficulties in educational settings where accurate semantic alignment and task-specific document retrieval are essential. In this paper, we propose a novel approach to multimodal textbook question answering by introducing a mechanism for enhancing semantic representations through multi-objective joint training. Our model, Joint Embedding Training With Ranking Supervision for Textbook Question Answering (JETRTQA), is a multimodal learning framework built on a retriever-generator architecture that uses a retrieval-augmented generation setup, in which a multimodal large language model generates answers. JETRTQA is designed to improve the relevance of retrieved documents in complex educational contexts. Unlike traditional direct scoring approaches, JETRTQA learns to refine the semantic representations of questions and documents through a supervised signal that combines pairwise ranking and implicit supervision derived from answers. We evaluate our method on the CK12-QA dataset and demonstrate that it significantly improves the discrimination between informative and irrelevant documents, even when they are long, complex, and multimodal. JETRTQA outperforms the previous state of the art, achieving a 2.4% gain in accuracy on the validation set and 11.1% on the test set.

Textbook question answering (TQA) has emerged as a central challenge in natural language processing because the complexity of educational content requires deep semantic reasoning. TQA involves the analysis of structured, often lengthy, educational documents that are frequently multimodal, incorporating elements such as diagrams, tables, or explanatory images. The retrieved information is then used to generate answers.
This process is not a simple fusion; it demands a strategic approach to overcome the fundamental limitations of traditional question-answering (QA) models, which are often unable to effectively handle long, complex, or out-of-domain contexts [1], [2].
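The pairwise ranking supervision described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the embedding encoder, the answer-derived implicit signal, and the loss weighting are not specified in this excerpt, so the function names, the cosine-similarity scorer, and the margin value below are illustrative assumptions. Only the pairwise hinge-ranking component, which pushes an informative document to out-score an irrelevant one, is shown.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors (assumed scorer).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def pairwise_ranking_loss(q, d_pos, d_neg, margin=0.2):
    # Hinge-style pairwise ranking loss: zero when the relevant document
    # d_pos out-scores the irrelevant d_neg by at least `margin` against
    # the question embedding q; positive (and trainable) otherwise.
    return max(0.0, margin - cosine(q, d_pos) + cosine(q, d_neg))

# Toy 2-D "embeddings": the loss vanishes when ranking is already correct,
# and grows with the margin violation when it is not.
aligned = pairwise_ranking_loss([1.0, 0.0], [1.0, 0.0], [0.0, 1.0])
violated = pairwise_ranking_loss([1.0, 0.0], [0.0, 1.0], [1.0, 0.0])
```

In a full training setup, gradients of this loss would flow back into the question and document encoders, jointly with the answer-generation objective, rather than into a separate direct scoring head.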