LLMs as Repositories of Factual Knowledge: Limitations and Solutions
Mousavi, Seyed Mahed, Alghisi, Simone, Riccardi, Giuseppe
arXiv.org Artificial Intelligence
LLMs' sources of knowledge are data snapshots containing factual information about entities, collected at different timestamps and from different media types (e.g., wikis, social media). Such unstructured knowledge is subject to change over time as sources are updated. Equally important are the inconsistencies and inaccuracies that occur across different information sources. Consequently, the model's knowledge about an entity may be perturbed while training over the sequence of snapshots or at inference time, resulting in inconsistent and inaccurate model performance. In this work, we study the appropriateness of Large Language Models (LLMs) as repositories of factual knowledge. We consider twenty-four state-of-the-art LLMs that are either closed-source, partially open-source (weights), or fully open-source (weights and training data). We evaluate their reliability in responding to time-sensitive factual questions in terms of accuracy and consistency when prompts are perturbed. We further evaluate the effectiveness of state-of-the-art methods to improve LLMs' accuracy and consistency. We then propose "ENtity-Aware Fine-tuning" (ENAF), a soft neurosymbolic approach aimed at providing a structured representation of entities during fine-tuning to improve the model's performance.
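To make the evaluation setup concrete, below is a minimal sketch of how accuracy and consistency could be scored for one time-sensitive factual question asked through several paraphrased (perturbed) prompts. The metric definitions, the `ask_model` callable, and the toy answers are illustrative assumptions, not the paper's exact protocol.

```python
from collections import Counter
from typing import Callable, Iterable


def evaluate_accuracy_and_consistency(
    ask_model: Callable[[str], str],
    prompt_variants: Iterable[str],
    gold_answer: str,
) -> tuple[float, float]:
    """Query a model with paraphrased variants of one factual question.

    Accuracy (assumed definition): fraction of variants answered with the
    gold answer. Consistency (assumed definition): fraction of variants
    agreeing with the model's most frequent answer, correct or not.
    """
    answers = [ask_model(p).strip().lower() for p in prompt_variants]
    accuracy = sum(a == gold_answer.lower() for a in answers) / len(answers)
    majority_count = Counter(answers).most_common(1)[0][1]
    consistency = majority_count / len(answers)
    return accuracy, consistency


if __name__ == "__main__":
    # Toy stand-in for an LLM call; a real setup would query the model under test.
    canned = {
        "who is the ceo of twitter?": "linda yaccarino",
        "who currently serves as twitter's ceo?": "elon musk",
        "name the current chief executive of twitter.": "linda yaccarino",
    }
    acc, cons = evaluate_accuracy_and_consistency(
        ask_model=lambda p: canned[p],
        prompt_variants=list(canned),
        gold_answer="Linda Yaccarino",
    )
    print(f"accuracy={acc:.2f} consistency={cons:.2f}")
```

In this toy run two of three perturbed prompts return the gold answer, so both accuracy and consistency are 0.67; a model that answered all variants identically but incorrectly would score 0.0 accuracy yet 1.0 consistency, which is why the paper reports the two measures separately.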
Jan-22-2025