When to Read Documents or QA History: On Unified and Selective Open-domain QA
Lee, Kyungjae, Han, Sang-eun, Hwang, Seung-won, Lee, Moontae
–arXiv.org Artificial Intelligence
Open-domain question answering is a well-known task in natural language processing, aiming to answer factoid questions from an open set of domains. One commonly used approach for this task is the retrieve-then-read pipeline (also known as Open-book QA), which retrieves relevant knowledge and then reasons over that knowledge to produce answers. Given the wide range of topics that open-domain questions can cover, a key to a successful answering model is to access and utilize diverse knowledge sources effectively.

Toward this goal, existing work can be categorized by the knowledge source used:

Document Corpus-based QA (Doc-QA): This type of work utilizes a general-domain document corpus (e.g., Wikipedia) (Karpukhin

Figure 1 illustrates the distinction of our approach, which provides both kinds of knowledge to a unified reader as context. We retrieve a list of relevant QA-pairs (called QA-history), then treat the few retrieved QA examples as if they were a relevant document passage.

Meanwhile, the closest approach to using multiple knowledge sources is concatenating the sources uniformly into a single decoder (Oguz et al., 2020), but we argue that knowledge selection is critically missing. To motivate, Figure 1 shows the QA-history, from which the answer 'Eric Liddell' is explicitly identified, while it is more implicit in the document, such that another name such as 'Hugh Hudson' is known to often confuse QA models. It is critical for the QA model to calibrate prediction quality as an indicator to decide when to use a
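The idea of treating retrieved QA-history as if it were a document passage can be sketched as a simple serialization step before the unified reader. This is a minimal illustrative sketch: the function name and the exact serialization format are assumptions, not the paper's implementation.

```python
def format_qa_history(qa_pairs, question):
    """Serialize retrieved QA-history into a passage-like context string.

    qa_pairs: list of (question, answer) tuples retrieved for the input
    question. The result mimics how a retrieved document passage would
    be concatenated with the question for a unified reader.
    (Hypothetical serialization; format is an illustrative assumption.)
    """
    # Render each retrieved QA pair as a short "Q: ... A: ..." segment,
    # so the reader consumes QA-history the same way it consumes text.
    history = " ".join(f"Q: {q} A: {a}" for q, a in qa_pairs)
    return f"question: {question} context: {history}"
```

A reader model would then consume this string exactly as it consumes a question paired with a retrieved passage, which is what allows both knowledge sources to share one unified reader.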
Jun-7-2023