Leveraging Inter-Chunk Interactions for Enhanced Retrieval in Large Language Model-Based Question Answering
Guo, Tiezheng, Wang, Chen, Liu, Yanyi, Tang, Jiawei, Li, Pan, Xu, Sai, Yang, Qingwen, Gao, Xianlin, Li, Zhi, Wen, Yingyou
arXiv.org Artificial Intelligence
Large language models (LLMs) have acquired superior reading comprehension and reasoning capabilities by pretraining on extensive natural language data [1, 2]. They have demonstrated remarkable performance on a variety of tasks and benchmarks, particularly in the realm of question answering (QA) [3, 4]. Researchers are expanding the parameter scale of these models to enable them to retain more knowledge [5]. However, due to the absence of efficient methods to evaluate or edit their internalized knowledge [6], knowledge-intensive tasks remain a major challenge. When dealing with complex multi-document question answering (MDQA) tasks, accurately understanding the question's constraints and covering all supporting evidence remains an open challenge [10, 11]. This difficulty arises because previous research has treated the relationship between each text chunk and the target question in isolation: retrieval models have concentrated solely on whether the main topic of each chunk aligns with the question [12]. Imperfect preprocessing can also lead to the incorrect truncation of continuous chunks.
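The independent-scoring paradigm the paper critiques can be sketched in a few lines: each chunk is ranked purely by its own similarity to the question, with no use of inter-chunk relationships. The bag-of-words cosine similarity below is a hypothetical stand-in for whatever embedding model a real retriever would use; it is an illustration of the isolated chunk-question scoring, not the authors' method.

```python
import math
import re
from collections import Counter

def vectorize(text):
    # Bag-of-words term-frequency vector (a stand-in for a learned embedding).
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(count * b.get(term, 0) for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question, chunks, k=2):
    # Each chunk is scored against the question in isolation;
    # relationships BETWEEN chunks never enter the ranking.
    q = vectorize(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, vectorize(c)), reverse=True)
    return ranked[:k]

chunks = [
    "The Eiffel Tower is located in Paris.",
    "Paris is the capital of France.",
    "Bananas are rich in potassium.",
]
print(retrieve("Where is the Eiffel Tower?", chunks, k=2))
```

Because each chunk is judged only on its own topical overlap with the question, a chunk that supplies supporting evidence without sharing the question's keywords would be missed, which is exactly the gap inter-chunk interactions aim to close.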
Aug-5-2024