Discern and Answer: Mitigating the Impact of Misinformation in Retrieval-Augmented Models with Discriminators
Giwon Hong, Jeonghwan Kim, Junmo Kang, Sung-Hyon Myaeng, Joyce Jiyoung Whang
arXiv.org Artificial Intelligence
Most existing retrieval-augmented language models (LMs) for question answering assume that all retrieved information is factually correct. In this work, we study a more realistic scenario in which retrieved documents may contain misinformation, causing conflicts among them. We observe that existing models are highly brittle to such conflicting information in both fine-tuning and in-context few-shot learning settings. We propose approaches that make retrieval-augmented LMs robust to misinformation by explicitly fine-tuning a discriminator or by prompting GPT-3 to elicit its discrimination capability. Our empirical results on open-domain question answering show that these approaches significantly improve the LMs' robustness to knowledge conflicts. We also report findings on interleaving the fine-tuned model's decisions with the in-context learning process, paving a new path to leveraging the best of both worlds.
May 2, 2023
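The abstract does not spell out the method, but it suggests a two-step "discern, then answer" pipeline: first elicit a judgment about which retrieved passages are trustworthy, then answer using only those. Below is a minimal Python sketch of that idea under stated assumptions; it is not the authors' code, and `call_lm` is a hypothetical stand-in for any prompt-to-completion LM interface.

```python
# Minimal sketch of a discriminate-then-answer prompting pipeline,
# assuming a generic LM interface (prompt -> completion). Not the
# paper's implementation; illustrative only.

from typing import Callable, List

def discern_and_answer(
    question: str,
    passages: List[str],
    call_lm: Callable[[str], str],  # hypothetical LM call: prompt -> completion
) -> str:
    # Step 1: prompt the LM to flag which passages look trustworthy,
    # given that some may contain misinformation and conflict.
    numbered = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages))
    discriminate_prompt = (
        "Some of the passages below may contain misinformation and "
        "contradict the others.\n"
        f"{numbered}\n"
        "List the indices of the trustworthy passages:"
    )
    kept = {
        int(tok) for tok in call_lm(discriminate_prompt).split() if tok.isdigit()
    }
    # Fall back to all passages if the discrimination output is unparseable.
    trusted = [p for i, p in enumerate(passages) if i in kept] or passages

    # Step 2: answer the question conditioned only on the trusted passages.
    answer_prompt = (
        "Answer the question using only the passages below.\n"
        + "\n".join(trusted)
        + f"\nQuestion: {question}\nAnswer:"
    )
    return call_lm(answer_prompt)
```

The same two-step structure could instead be realized with a separately fine-tuned discriminator replacing the first prompt, which is the other variant the abstract mentions.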