Memory in humans and deep language models: Linking hypotheses for model augmentation
Omri Raccah, Phoebe Chen, Ted L. Willke, David Poeppel, Vy A. Vo
arXiv.org Artificial Intelligence
The computational complexity of the self-attention mechanism in Transformer models significantly limits their ability to generalize over long temporal durations. Memory augmentation, the explicit storage of past information in an external memory for use in subsequent predictions, has become a constructive avenue for mitigating this limitation. We argue that memory-augmented Transformers can benefit substantially from considering insights from the human memory literature. We detail an approach for integrating evidence from the human memory system through the specification of cross-domain linking hypotheses. We then provide an empirical demonstration to evaluate the use of surprisal as a linking hypothesis, and further identify the limitations of this approach to inform future research.
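The abstract does not give implementation details for using surprisal as a linking hypothesis. As a minimal sketch only, the snippet below shows one way token-level surprisal from a pretrained causal language model could be used to decide which past segments to keep in an external memory. The choice of GPT-2 via the Hugging Face transformers library, the nats threshold, and the simple token list are illustrative assumptions, not the authors' method.

```python
"""Illustrative sketch (not the paper's implementation): compute token-level
surprisal with a pretrained causal LM and use a threshold to select
high-surprisal content as candidates for an external memory."""
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisal(text: str) -> list[tuple[str, float]]:
    """Return (token, surprisal in nats) pairs for each predicted token."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits  # (1, seq_len, vocab)
    # Surprisal of token t is -log p(token_t | tokens_<t), so shift by one.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = input_ids[:, 1:]
    surprisals = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    tokens = tokenizer.convert_ids_to_tokens(targets[0].tolist())
    return list(zip(tokens, surprisals[0].tolist()))

def select_for_memory(text: str, threshold: float = 6.0) -> list[str]:
    """Keep only high-surprisal tokens as memory candidates.
    The 6.0-nat threshold is an arbitrary value for demonstration."""
    return [tok for tok, s in token_surprisal(text) if s > threshold]

if __name__ == "__main__":
    passage = "The cat sat on the mat. Suddenly, the mat began to levitate."
    print(select_for_memory(passage))
```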
Nov-27-2022
- Country:
  - Europe > United Kingdom > England (0.28)
  - North America > United States (1.00)
- Genre:
  - Research Report > Experimental Study (0.49)
  - Research Report > New Finding (0.70)
- Industry:
  - Health & Medicine > Therapeutic Area > Neurology (0.95)