Humans and language models diverge when predicting repeating text
Vaidya, Aditya R., Turek, Javier, Huth, Alexander G.
arXiv.org Artificial Intelligence
Language models (LMs) trained on the next-word prediction task have been shown to accurately model human behavior in word prediction and reading speed. In contrast with these findings, we present a scenario in which the performance of humans and LMs diverges. We collected a dataset of human next-word predictions for five stimuli that are formed by repeating spans of text. Human and GPT-2 LM predictions are strongly aligned on the first presentation of a text span, but their performance quickly diverges once memory (or in-context learning) begins to play a role. We traced the cause of this divergence to specific attention heads in a middle layer. Adding a power-law recency bias to these attention heads yielded a model that behaves much more similarly to humans. We hope that this scenario will spur future work on bringing LMs closer to human behavior.
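The abstract's intervention, adding a power-law recency bias to attention heads, can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows one plausible reading: before the softmax, each key at distance `d` from the query is penalized so its post-softmax weight is scaled by `(d + 1) ** -alpha`, making distant (e.g. previously seen, repeated) context count less, in the spirit of human memory decay. The function name and the choice of `alpha` are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_recency_bias(scores, alpha=0.5):
    """Sketch of a power-law recency bias on causal attention scores.

    Hypothetical reading of the paper's intervention: a key at distance
    d from the query is down-weighted by (d + 1) ** -alpha, applied in
    log-space as an additive bias before the softmax.
    """
    T = scores.shape[-1]
    q_idx = np.arange(T)[:, None]   # query positions
    k_idx = np.arange(T)[None, :]   # key positions
    dist = q_idx - k_idx            # distance from query back to key
    causal = dist >= 0              # mask out future positions
    # Adding log((d+1)^-alpha) to the scores multiplies the
    # post-softmax weights by (d+1)^-alpha.
    bias = np.where(causal, -alpha * np.log1p(dist.clip(min=0)), -np.inf)
    return softmax(scores + bias, axis=-1)
```

With uniform (all-zero) scores, the biased weights fall off with distance, so the most recent token dominates each row; setting `alpha = 0` recovers plain causal attention.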
Oct-22-2023