EntGPT: Linking Generative Large Language Models with Knowledge Bases
Ding, Yifan, Poudel, Amrit, Zeng, Qingkai, Weninger, Tim, Veeramani, Balaji, Bhattacharya, Sanmitra
arXiv.org Artificial Intelligence
The ability of Large Language Models (LLMs) to generate factually correct output remains relatively unexplored due to the lack of fact-checking and knowledge grounding during training and inference. In this work, we aim to address this challenge through the Entity Disambiguation (ED) task. We first consider prompt engineering and design a three-step hard-prompting method to probe LLMs' ED performance without supervised fine-tuning (SFT). Overall, the prompting method improves the micro-F1 score of the original vanilla models by a large margin, in some cases by 36% or more, and achieves performance comparable to existing SFT-based methods across 10 datasets. We further improve the knowledge grounding ability through instruction tuning (IT) with similar prompts and responses. The instruction-tuned model not only achieves higher micro-F1 scores than several baseline methods on supervised entity disambiguation tasks, with an average micro-F1 improvement of 2.1% over the existing baseline models, but also obtains higher accuracy on six Question Answering (QA) tasks in the zero-shot setting. Our methodologies apply to both open- and closed-source LLMs.
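To make the three-step hard-prompting idea concrete, the sketch below shows one plausible shape it could take: augment the context around a mention, retrieve candidate entities from a knowledge base, then pose a multiple-choice question for the model to answer. This is a minimal illustration, not the authors' implementation; the prompt wording, the model name, and the `retrieve_candidates` helper are all assumptions, and an OpenAI-style chat client stands in for whichever open- or closed-source LLM is used.

```python
# Illustrative sketch of a three-step hard-prompting pipeline for Entity
# Disambiguation. Prompt text, model choice, and the candidate-retrieval
# helper are hypothetical placeholders, not the paper's exact method.
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-style chat-completions API

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the paper covers open- and closed-source LLMs
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

def retrieve_candidates(mention: str) -> list[str]:
    """Placeholder: look up candidate entities for `mention` in a knowledge
    base (e.g., a Wikipedia alias table). Implementation is out of scope."""
    raise NotImplementedError

def disambiguate(text: str, mention: str) -> str:
    # Step 1: prompt the LLM to augment the context around the mention.
    augmented = ask(
        f"Tell me more about '{mention}' as used in the following text:\n{text}"
    )
    # Step 2: gather candidate entities from the knowledge base.
    candidates = retrieve_candidates(mention)
    # Step 3: pose a multiple-choice question so the LLM selects one candidate.
    options = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return ask(
        f"Context:\n{text}\n\nAdditional information:\n{augmented}\n\n"
        f"Which entity does '{mention}' refer to? Answer with one option:\n{options}"
    )
```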
Feb-9-2024