Procedural Memory Is Not All You Need: Bridging Cognitive Gaps in LLM-Based Agents
Schaun Wheeler, Olivier Jeunen
arXiv.org Artificial Intelligence
Large Language Models (LLMs) represent a landmark achievement in Artificial Intelligence (AI), demonstrating unprecedented proficiency in procedural tasks such as text generation, code completion, and conversational coherence. These capabilities stem from their architecture, which mirrors human procedural memory: the brain's ability to automate repetitive, pattern-driven tasks through practice. However, as LLMs are increasingly deployed in real-world applications, their limitations in complex, unpredictable environments become impossible to ignore. This paper argues that LLMs, while transformative, are fundamentally constrained by their reliance on procedural memory. To create agents capable of navigating "wicked" learning environments, where rules shift, feedback is ambiguous, and novelty is the norm, we must augment LLMs with semantic memory and associative learning systems. By adopting a modular architecture that decouples these cognitive functions, we can bridge the gap between narrow procedural expertise and the adaptive intelligence required for real-world problem-solving.
May 7, 2025
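The modular architecture the abstract advocates could be sketched, in highly simplified form, as three decoupled memory components: a procedural store of practiced routines (standing in for the LLM's pattern-completion behaviour), a semantic store of explicit facts the agent can update as the environment shifts, and an associative layer that links cues to those facts. All class and method names below are illustrative assumptions, not drawn from the paper.

```python
class ProceduralMemory:
    """Pattern-driven skill store: maps a task name to a practiced routine.
    This stands in for the LLM's learned, automated behaviour."""
    def __init__(self):
        self._skills = {}

    def learn(self, task, routine):
        self._skills[task] = routine

    def execute(self, task, context):
        routine = self._skills.get(task)
        return routine(context) if routine else None


class SemanticMemory:
    """Explicit, updatable fact store, so knowledge can change without
    retraining the procedural component."""
    def __init__(self):
        self._facts = {}

    def remember(self, key, value):
        self._facts[key] = value

    def recall(self, key):
        return self._facts.get(key)


class AssociativeMemory:
    """Links between observed cues and fact keys, letting a novel cue
    retrieve related knowledge it was never directly trained on."""
    def __init__(self):
        self._links = {}

    def associate(self, cue, fact_key):
        self._links.setdefault(cue, set()).add(fact_key)

    def related(self, cue):
        return self._links.get(cue, set())


class ModularAgent:
    """Decouples the three cognitive functions: associative retrieval
    grounds a procedural routine in current semantic facts."""
    def __init__(self):
        self.procedural = ProceduralMemory()
        self.semantic = SemanticMemory()
        self.associative = AssociativeMemory()

    def act(self, task, cue):
        # Gather the facts currently associated with the (possibly novel) cue...
        context = {k: self.semantic.recall(k)
                   for k in self.associative.related(cue)}
        # ...then let the practiced routine operate on that grounded context.
        return self.procedural.execute(task, context)
```

Because the stores are decoupled, updating a fact in `SemanticMemory` or adding a link in `AssociativeMemory` changes the agent's behaviour immediately, without touching the procedural routine; that separation is the design point the abstract argues for.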