Knowledge Extraction on Semi-Structured Content: Does It Remain Relevant for Question Answering in the Era of LLMs?
Kai Sun, Yin Huang, Srishti Mehra, Mohammad Kachuee, Xilun Chen, Renjie Tao, Zhaojiang Lin, Andrea Jessee, Nirav Shah, Alex Betty, Yue Liu, Anuj Kumar, Wen-tau Yih, Xin Luna Dong
arXiv.org Artificial Intelligence
The advent of Large Language Models (LLMs) has significantly advanced web-based Question Answering (QA) systems over semi-structured content, raising questions about the continued utility of knowledge extraction for question answering. This paper investigates the value of triple extraction in this new paradigm by extending an existing benchmark with knowledge extraction annotations and evaluating commercial and open-source LLMs of varying sizes. Our results show that web-scale knowledge extraction remains a challenging task for LLMs. Despite achieving high QA accuracy, LLMs can still benefit from knowledge extraction through augmentation with extracted triples and through multi-task learning. These findings provide insights into the evolving role of knowledge triple extraction in web-based QA and highlight strategies for maximizing LLM effectiveness across different model sizes and resource settings.
Sep-30-2025