Unsupervised multiple choices question answering via universal corpus
Zhang, Qin; Ge, Hao; Chen, Xiaojun; Fang, Meng
Unsupervised question answering is a promising yet challenging task that alleviates the burden of building large-scale annotated data in a new domain. This motivates us to study the unsupervised multiple-choice question answering (MCQA) problem. In this paper, we propose a novel framework designed to generate synthetic MCQA data based solely on contexts from a universal domain, without relying on any form of manual annotation. Possible answers are extracted and used to produce related questions; we then leverage both named entities (NE) and knowledge graphs to discover plausible distractors and form complete synthetic samples.

Fabbri et al. [8] and Li et al. [11] further extended this idea with template-based question generation and iterative data refinement, but these approaches are still only applicable to extractive QA (EQA) tasks. There have also been some attempts at MCQA without supervision. Liu and Lee [12] assumed the absence of correct answer labels but directly trained a QA model on the context, question, and answer candidate sets. Ren and Zhu [13] emphasized distractor generation, constructing a complete sample from the given context, question, and correct answer. Nevertheless, these methods still depend on a certain amount of data in the target domain, such as contexts and questions, which further limits their applicability.
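The abstract outlines a three-step pipeline: extract candidate answers from raw contexts, turn them into questions, and mine named entities (plus knowledge-graph neighbors) as plausible distractors. The snippet below is a minimal sketch of that idea, assuming spaCy's en_core_web_sm model for NE extraction and simple cloze-style (blank-out) questions in place of the paper's question-generation step; the knowledge-graph distractor source is omitted, and the function names are illustrative rather than the authors' implementation.

```python
# Sketch of a synthetic-MCQA generation pipeline: extract candidate answers
# with NER, turn each answer's sentence into a cloze-style question, and pick
# same-type named entities from the rest of the corpus as distractors.
# Assumes spaCy with the en_core_web_sm model installed; the knowledge-graph
# distractor source described in the paper is not covered here.
import random
import spacy

nlp = spacy.load("en_core_web_sm")

def generate_mcqa(contexts, num_distractors=3, seed=0):
    rng = random.Random(seed)
    docs = [nlp(c) for c in contexts]

    # Pool of entity surface forms per NE type, used as a distractor source.
    pool = {}
    for doc in docs:
        for ent in doc.ents:
            pool.setdefault(ent.label_, set()).add(ent.text)

    samples = []
    for doc in docs:
        for ent in doc.ents:
            # Cloze-style question: blank out the answer span in its sentence.
            question = ent.sent.text.replace(ent.text, "_____", 1)

            # Distractors: other entities sharing the answer's NE type.
            candidates = [e for e in pool.get(ent.label_, ()) if e != ent.text]
            if len(candidates) < num_distractors:
                continue  # not enough same-type entities to build options
            options = rng.sample(candidates, num_distractors) + [ent.text]
            rng.shuffle(options)

            samples.append({
                "context": doc.text,
                "question": question,
                "options": options,
                "answer": ent.text,
            })
    return samples
```

Passing a handful of plain-text passages to generate_mcqa yields (context, question, options, answer) records; same-type NE matching is only a crude stand-in for the distractor-plausibility filtering the paper describes.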
arXiv.org Artificial Intelligence
Feb-27-2024
- Country:
- Asia > China (0.15)
- Europe > United Kingdom (0.14)
- Genre:
- Questionnaire & Opinion Survey (0.62)
- Research Report (0.82)
- Industry:
- Education (0.62)
- Technology: