Data Doping or True Intelligence? Evaluating the Transferability of Injected Knowledge in LLMs
Essa Jan, Moiz Ali, Muhammad Saram Hassan, Fareed Zaffar, Yasir Zaki
–arXiv.org Artificial Intelligence
As the knowledge of large language models (LLMs) becomes outdated over time, there is a growing need for efficient methods to update them, especially when injecting proprietary information. Our study reveals that comprehension-intensive fine-tuning tasks (e.g., question answering and fill-in-the-blank) achieve substantially higher knowledge retention rates (48%) than mapping-oriented tasks such as translation (17%) or text-to-JSON conversion (20%), despite exposure to identical factual content. We demonstrate that this pattern persists across model architectures and follows scaling laws, with larger models showing improved retention across all task types. However, all models exhibit significant performance drops when applying injected knowledge in broader contexts, suggesting limited semantic integration. These findings underscore the importance of task selection when updating LLM knowledge: effective knowledge injection depends not just on data exposure but on the depth of cognitive engagement during fine-tuning.
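To make the contrast concrete, here is a minimal sketch of how a single fact might be rendered into the comprehension-intensive versus mapping-oriented task formats the abstract compares. The fact, task names, and prompt templates are illustrative assumptions, not taken from the paper itself.

```python
# Hypothetical sketch: formatting one fact into the four fine-tuning
# task types the abstract contrasts. The fact and templates are
# invented for illustration only.

FACT = "The Zephyr-9 chipset was released in March 2031."

def make_examples(fact: str) -> dict[str, dict[str, str]]:
    """Render the same fact as four fine-tuning (prompt, completion) pairs."""
    return {
        # Comprehension-intensive: the model must recall the fact itself.
        "question_answering": {
            "prompt": "When was the Zephyr-9 chipset released?",
            "completion": "March 2031",
        },
        "fill_in_the_blank": {
            "prompt": "The Zephyr-9 chipset was released in ____.",
            "completion": "March 2031",
        },
        # Mapping-oriented: the model transforms the surface form, with the
        # fact present in the input, so little recall is required.
        "translation": {
            "prompt": f"Translate to French: {fact}",
            "completion": "Le chipset Zephyr-9 est sorti en mars 2031.",
        },
        "text_to_json": {
            "prompt": f"Convert to JSON: {fact}",
            "completion": '{"product": "Zephyr-9", "released": "2031-03"}',
        },
    }

examples = make_examples(FACT)
```

Note that all four formats expose the model to the identical fact; the abstract's point is that retention differs sharply depending on whether the task forces the model to reproduce the fact from its weights or merely restructure it.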
May-26-2025