Knowledge-Instruct: Effective Continual Pre-training from Limited Data using Instructions
Oded Ovadia, Meni Brief, Rachel Lemberg, Eitam Sheetrit
arXiv.org Artificial Intelligence
While Large Language Models (LLMs) acquire vast knowledge during pre-training, they often lack domain-specific, new, or niche information. Continual pre-training (CPT) attempts to address this gap but suffers from catastrophic forgetting and inefficiencies in low-data regimes. We introduce Knowledge-Instruct, a novel approach to efficiently inject knowledge from limited corpora through pure instruction-tuning. By generating information-dense synthetic instruction data, it effectively integrates new knowledge while preserving general reasoning and instruction-following abilities. Knowledge-Instruct demonstrates superior factual memorization, minimizes catastrophic forgetting, and remains scalable by leveraging synthetic data from relatively small language models. Additionally, it enhances contextual understanding, including complex multi-hop reasoning, facilitating integration with retrieval systems. We validate its effectiveness across diverse benchmarks, including Companies, a new dataset that we release to measure knowledge injection capabilities.
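The abstract describes the method only at a high level: synthesize information-dense instruction data from a small corpus with a (relatively small) generator model, then instruction-tune the target model on it. The sketch below is a minimal, illustrative rendering of that idea, not the paper's actual pipeline; the prompt wording, helper names, and chat-style JSONL output format are assumptions.

```python
# Minimal sketch of instruction-based knowledge injection (assumptions noted above):
# (1) turn each passage of a limited corpus into self-contained Q&A pairs using any
#     text-generation callable, then (2) write them out as chat-format records for a
#     standard supervised instruction-tuning run.
import json

SYNTH_PROMPT = (
    "Read the passage below and write {n} self-contained question-answer "
    "pairs that together cover every fact it states.\n\nPassage:\n{passage}"
)

def make_instruction_pairs(passages, generate, pairs_per_passage=5):
    """Build synthetic instruction data from a small corpus.

    `generate` is any text-generation callable (e.g. a small chat model);
    here it is expected to return one 'Q: ... A: ...' pair per line.
    """
    records = []
    for passage in passages:
        raw = generate(SYNTH_PROMPT.format(n=pairs_per_passage, passage=passage))
        for line in raw.splitlines():
            if line.startswith("Q:") and " A:" in line:
                q, a = line[2:].split(" A:", 1)
                records.append({
                    "messages": [
                        {"role": "user", "content": q.strip()},
                        {"role": "assistant", "content": a.strip()},
                    ]
                })
    return records

if __name__ == "__main__":
    corpus = ["Acme Corp was founded in 2021 and builds industrial robots."]
    # Stub generator so the sketch runs without a model; swap in a real LLM call.
    fake_llm = lambda prompt: "Q: When was Acme Corp founded? A: In 2021."
    data = make_instruction_pairs(corpus, fake_llm)
    with open("knowledge_instruct_sft.jsonl", "w") as f:
        for rec in data:
            f.write(json.dumps(rec) + "\n")
    # The resulting JSONL can feed any standard chat-format SFT pipeline to
    # inject the new facts into the model via pure instruction-tuning.
```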
Apr-9-2025