ExplicitLM: Decoupling Knowledge from Parameters via Explicit Memory Banks
Chengzhang Yu, Zening Lu, Chenyang Zheng, Chiyue Wang, Yiming Zhang, Zhanpeng Jin
arXiv.org Artificial Intelligence
Large language models (LLMs) universally suffer from knowledge staleness and a lack of interpretability due to their implicit knowledge storage paradigm, in which information is distributed across network parameters in an entangled, non-addressable manner. This fundamental limitation prevents targeted knowledge updates, verification of stored information, and understanding of model reasoning processes. We propose ExplicitLM, a novel architecture that fundamentally reimagines knowledge storage in language models through an explicit, interpretable memory bank system. Our key innovation introduces a million-scale external memory bank in which each entry stores human-readable knowledge as token sequences, enabling direct inspection and modification of the model's knowledge base. To efficiently access this massive repository, we design a differentiable two-stage retrieval mechanism that enables end-to-end training while maintaining discrete knowledge selection, combining efficient coarse-grained filtering via product key decomposition (reducing computational complexity from O(N·|I|) to O(√N·|I|)) with fine-grained similarity matching through Gumbel-Softmax. Drawing inspiration from dual-system cognitive theory, we partition knowledge into frozen explicit facts (20%) and learnable implicit patterns (80%), maintained through an Exponential Moving Average (EMA) update strategy that ensures training stability.
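The retrieval mechanism described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the sizes, sub-key tables, and helper names (`product_key_retrieve`, `gumbel_softmax`, `ema_update`) are assumptions. The sketch shows the two ideas the abstract names: product key decomposition scores 2·√N sub-keys instead of N full keys, and Gumbel-Softmax turns candidate selection into a differentiable soft choice; an EMA rule updates the learnable portion of the bank.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a bank of N = n*n entries addressed by product keys,
# i.e. two sub-key tables of n keys each (so sqrt(N) = n).
n, d = 32, 16                              # N = 1024 entries; d = query dim (even)
subkeys_a = rng.normal(size=(n, d // 2))   # sub-keys for the first query half
subkeys_b = rng.normal(size=(n, d // 2))   # sub-keys for the second query half

def product_key_retrieve(query, top_c=4):
    """Coarse stage: score 2n sub-keys rather than all n*n full keys."""
    qa, qb = query[:d // 2], query[d // 2:]
    sa = subkeys_a @ qa                    # (n,) half-scores, O(sqrt(N)) work
    sb = subkeys_b @ qb
    ia = np.argsort(sa)[-top_c:]           # top candidates per half
    ib = np.argsort(sb)[-top_c:]
    # A full key (i, j) scores as the sum of its two half-scores.
    cands = [(sa[i] + sb[j], i * n + j) for i in ia for j in ib]
    cands.sort(reverse=True)
    return cands                           # (score, entry_index), best first

def gumbel_softmax(logits, tau=1.0):
    """Fine stage: soft, differentiable one-hot over the candidate set."""
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                # Gumbel(0, 1) noise
    y = (logits + g) / tau
    e = np.exp(y - y.max())
    return e / e.sum()                     # weights summing to 1

def ema_update(memory, idx, new_value, decay=0.99):
    """EMA rule for the learnable (implicit) entries of the bank."""
    memory[idx] = decay * memory[idx] + (1 - decay) * new_value

query = rng.normal(size=d)
cands = product_key_retrieve(query)
weights = gumbel_softmax(np.array([s for s, _ in cands]))
chosen = cands[int(weights.argmax())][1]   # hard index; soft weights carry gradient
```

In training one would use the soft weights for backpropagation and the argmax index for the discrete lookup (straight-through style); here both are shown side by side.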
Nov 4, 2025