Variational Open-Domain Question Answering
Liévin, Valentin, Motzfeldt, Andreas Geert, Jensen, Ida Riis, Winther, Ole
–arXiv.org Artificial Intelligence
Retrieval-augmented models have proven to be effective in natural language processing tasks, yet there remains a lack of research on their optimization using variational inference. We introduce the Variational Open-Domain (VOD) framework for end-to-end training and evaluation of retrieval-augmented models, focusing on open-domain question answering and language modelling. The VOD objective, a self-normalized estimate of the Rényi variational bound, approximates the task marginal likelihood and is evaluated under samples drawn from an auxiliary sampling distribution (cached retriever and/or approximate posterior). It remains tractable, even for retriever distributions defined on large corpora. We demonstrate VOD's versatility by training reader-retriever BERT-sized models on multiple-choice medical exam questions. On the MedMCQA dataset, we outperform the domain-tuned Med-PaLM by +5.3% despite using 2,500$\times$ fewer parameters. Our retrieval-augmented BioLinkBERT model scored 62.9% on MedMCQA and 55.0% on MedQA-USMLE. Lastly, we show the effectiveness of our learned retriever component in the context of medical semantic search.
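The core idea in the abstract — a self-normalized Monte Carlo estimate of the Rényi variational bound, with the retrieved passage as the latent variable and samples drawn from an auxiliary proposal — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name `vod_objective` and the three input arrays are assumptions for the sketch.

```python
import numpy as np
from scipy.special import logsumexp

def vod_objective(retriever_scores, log_proposal, log_likelihood, alpha=0.5):
    """Sketch of a self-normalized estimate of the Rényi variational bound
        L_alpha = 1/(1-alpha) * log E_{p(d|q)}[ p(a|q,d)^(1-alpha) ],
    where the expectation over the retriever distribution p(d|q) is
    approximated with K passages drawn from an auxiliary proposal r(d|q).

    retriever_scores: unnormalized retriever log-scores f(q, d_k), shape (K,)
    log_proposal:     log r(d_k|q) under the auxiliary sampler, shape (K,)
    log_likelihood:   reader log p(a|q, d_k), shape (K,)
    """
    # Importance log-weights against the proposal; self-normalizing them
    # over the K samples means the corpus-wide normalizer of p(d|q)
    # never needs to be computed -- this is what keeps the estimate
    # tractable for retrievers defined on large corpora.
    log_w = retriever_scores - log_proposal
    log_w_hat = log_w - logsumexp(log_w)
    # Monte Carlo estimate of the Rényi bound, computed in log space.
    return logsumexp(log_w_hat + (1.0 - alpha) * log_likelihood) / (1.0 - alpha)
```

Note that as `alpha` approaches 0 the expression reduces to a self-normalized estimate of the marginal log-likelihood, and the estimate is invariant to any constant shift of the unnormalized retriever scores.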
May-31-2023
- Country:
- Europe (0.93)
- North America > United States (1.00)
- Genre:
- Research Report (1.00)
- Industry:
- Education > Curriculum
- Subject-Specific Education (0.67)
- Health & Medicine
- Diagnostic Medicine (1.00)
- Pharmaceuticals & Biotechnology (1.00)
- Therapeutic Area
- Gastroenterology (0.67)
- Infections and Infectious Diseases (1.00)
- Musculoskeletal (0.67)
- Neurology (1.00)
- Psychiatry/Psychology (0.67)
- Rheumatology (0.67)