Memory in humans and deep language models: Linking hypotheses for model augmentation
Raccah, Omri, Chen, Phoebe, Willke, Ted L., Poeppel, David, Vo, Vy A.
arXiv.org Artificial Intelligence
The computational complexity of the self-attention mechanism in Transformer models significantly limits their ability to generalize over long temporal durations. Memory augmentation, the explicit storing of past information in external memory for use in subsequent predictions, has become a promising avenue for mitigating this limitation. We argue that memory-augmented Transformers can benefit substantially from insights in the human memory literature. We detail an approach for integrating evidence from the human memory system through the specification of cross-domain linking hypotheses. We then provide an empirical demonstration evaluating surprisal as a linking hypothesis, and identify the limitations of this approach to inform future research.
Nov-27-2022
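The abstract proposes surprisal as a linking hypothesis, i.e., a signal shared by human memory research and language models that could guide what a memory-augmented Transformer stores. A minimal sketch of the underlying quantity, using a hypothetical toy unigram model rather than the authors' actual setup:

```python
import math

def surprisal(token, probs):
    """Surprisal of a token under a probability model: -log2 p(token), in bits."""
    return -math.log2(probs[token])

# Toy unigram probabilities (illustrative values, not from the paper).
probs = {"the": 0.5, "cat": 0.25, "sat": 0.25}

# Less probable tokens yield higher surprisal, a candidate criterion
# for deciding which past events to write to external memory.
print(surprisal("the", probs))  # 1.0 bit
print(surprisal("cat", probs))  # 2.0 bits
```

In practice the probabilities would come from the language model's own next-token distribution; the toy dictionary here only illustrates the computation.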