Self-Prompting Large Language Models for Zero-Shot Open-Domain QA
Junlong Li, Zhuosheng Zhang, Hai Zhao
–arXiv.org Artificial Intelligence
Open-Domain Question Answering (ODQA) aims at answering factoid questions without explicitly providing specific background documents. In a zero-shot setting, this task is more challenging since no data is available to train customized models such as Retriever-Readers. Recently, Large Language Models (LLMs) like GPT-3 have shown their power in zero-shot ODQA with direct prompting methods, but these methods still fall far short of unleashing the full power of LLMs, invoking them only implicitly. In this paper, we propose a Self-Prompting framework to explicitly utilize the massive knowledge stored in the parameters of LLMs and their strong instruction-understanding abilities. Concretely, we prompt LLMs step by step to generate multiple pseudo QA pairs with background passages and explanations from scratch, then use these generated elements for in-context learning. Experimental results show that our method significantly surpasses previous SOTA methods on three widely used ODQA datasets, and even achieves performance comparable to some Retriever-Reader models fine-tuned on full training data.
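The step-by-step generation and in-context assembly described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `call_llm` is a hypothetical stand-in for any LLM completion API (e.g. GPT-3), stubbed here so the example runs offline, and the prompt wordings are assumptions.

```python
# Sketch of a Self-Prompting-style pipeline: (1) have the LLM generate
# pseudo QA demonstrations (passage, question-answer, explanation) from
# scratch, then (2) use them as in-context examples for a test question.

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: a real implementation would query an LLM API.
    return "stub completion"

def generate_pseudo_qa(topic: str) -> dict:
    """Step-by-step generation of one pseudo QA demonstration."""
    passage = call_llm(f"Write a short Wikipedia-style passage about {topic}.")
    qa = call_llm(
        f"Passage: {passage}\n"
        "Write a factoid question answered by this passage, then its answer."
    )
    explanation = call_llm(
        f"Passage: {passage}\n{qa}\n"
        "Explain in one sentence why the answer is correct."
    )
    return {"passage": passage, "qa": qa, "explanation": explanation}

def build_icl_prompt(demos: list, question: str) -> str:
    """Assemble the generated elements into an in-context learning prompt."""
    parts = [
        f"Passage: {d['passage']}\n{d['qa']}\nExplanation: {d['explanation']}"
        for d in demos
    ]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

demos = [generate_pseudo_qa(t) for t in ["the Eiffel Tower", "penicillin"]]
prompt = build_icl_prompt(demos, "Who wrote Hamlet?")
```

In the paper's setting the final prompt is sent back to the same LLM, so the model answers the test question conditioned on demonstrations it generated itself.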
May-16-2023