Improving Passage Retrieval with Zero-Shot Question Generation
Sachan, Devendra Singh; Lewis, Mike; Joshi, Mandar; Aghajanyan, Armen; Yih, Wen-tau; Pineau, Joelle; Zettlemoyer, Luke
arXiv.org Artificial Intelligence
Queries and documents are typically embedded in a shared representation space to enable efficient search, before using a task-specific model to perform a deeper, token-level document analysis (e.g. a document reader that selects an answer span). We show that adding a zero-shot re-ranker to the retrieval stage of such models leads to large gains in performance, by doing deep token-level analysis with no task-specific data or tuning.

… of query scoring with count-based language models (Zhai and Lafferty, 2001). However, instead of estimating a language model from each passage, UPR uses pre-trained language models (PLMs). More recent work on re-rankers has fine-tuned PLMs on question-passage pairs to generate relevance labels (Nogueira et al., 2020), sometimes to jointly generate question and relevance labels (Nogueira dos Santos et al., 2020; Ju et al., …
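The passage above describes UPR's core idea: re-score retrieved passages with a pre-trained language model by asking how likely the input question is given each passage, with no task-specific training. Below is a minimal sketch of how such a zero-shot re-ranker might look with an off-the-shelf seq2seq PLM; the checkpoint name and prompt wording are illustrative assumptions, not necessarily the exact configuration from the paper.

```python
# Sketch of zero-shot question-generation re-ranking: rank each passage by
# the average log-likelihood of the question conditioned on that passage.
# The checkpoint and prompt below are assumptions chosen for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/t5-base-lm-adapt"  # assumed PLM checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
model.eval()

def rerank(question: str, passages: list[str]) -> list[tuple[float, str]]:
    """Return (score, passage) pairs sorted by log p(question | passage)."""
    scored = []
    for passage in passages:
        # Hypothetical instruction-style prompt conditioning on the passage.
        prompt = f"Passage: {passage} Please write a question based on this passage."
        inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
        labels = tokenizer(question, return_tensors="pt").input_ids
        with torch.no_grad():
            # The seq2seq loss is the mean token-level negative log-likelihood
            # of the question given the passage; negate it to get a score.
            loss = model(input_ids=inputs.input_ids,
                         attention_mask=inputs.attention_mask,
                         labels=labels).loss
        scored.append((-loss.item(), passage))
    return sorted(scored, reverse=True)
```

Because the score is a per-token likelihood of the question under the PLM, the re-ranker performs the deep token-level analysis the abstract refers to while requiring no question-passage relevance labels.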
Apr-2-2023