Modeling Task Effects in Human Reading with Neural Network-based Attention
arXiv.org Artificial Intelligence
Research on human reading has long documented task-specific effects in reading behavior, but building general models that predict how humans will read under a given task has proved challenging. We introduce NEAT, a computational model of the allocation of attention in human reading, based on the hypothesis that human reading optimizes a tradeoff between economy of attention and success at a task. Our model is implemented using contemporary neural network modeling techniques, and it makes explicit, testable predictions about how the allocation of attention varies across tasks. We test these predictions in an eye-tracking study comparing two versions of a reading comprehension task, finding that our model successfully accounts for reading behavior across both tasks. Our work thus provides evidence that task effects can be modeled as optimal adaptation to task demands.
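The abstract's central idea is an objective that trades off economy of attention against task success. The NEAT model itself is not reproduced here; the following is a minimal sketch of that kind of tradeoff, in which the function name, the linear form of the objective, and the weight `lam` are all illustrative assumptions rather than details from the paper:

```python
# Hypothetical sketch of an economy-vs-success tradeoff: a reader chooses
# which words to attend to (a 0/1 mask over the text), and the objective
# rewards task success while charging a cost per attended word.
# The linear form and the weight `lam` are assumptions for illustration.

def tradeoff_objective(task_success, attention_mask, lam=0.1):
    """Task success minus a cost proportional to the fraction of words attended."""
    attention_cost = sum(attention_mask) / len(attention_mask)
    return task_success - lam * attention_cost

# A task demanding detailed comprehension can justify attending to more
# words, because the gain in success outweighs the added attention cost.
sparse = tradeoff_objective(task_success=0.70, attention_mask=[1, 0, 0, 1, 0])
dense = tradeoff_objective(task_success=0.95, attention_mask=[1, 1, 1, 1, 1])
```

Under this toy objective, the optimal amount of attention shifts with the task: the same reader should skim when shallow success is achievable and read densely when the task demands it, which is the pattern the eye-tracking study probes.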
Sep-16-2022
- Country:
- Europe > Belgium
- Brussels-Capital Region > Brussels (0.04)
- North America
- Canada
- Ontario > National Capital Region
- Ottawa (0.04)
- Quebec > Montreal (0.04)
- United States
- Massachusetts > Middlesex County
- Cambridge (0.04)
- Michigan (0.04)
- New York (0.04)
- Texas > Travis County
- Austin (0.04)
- Genre:
- Research Report
- Experimental Study (1.00)
- New Finding (1.00)
- Industry:
- Education (1.00)
- Health & Medicine > Therapeutic Area
- Neurology (0.67)
- Technology:
- Information Technology > Artificial Intelligence
- Cognitive Science (1.00)
- Machine Learning
- Learning Graphical Models > Directed Networks
- Bayesian Learning (0.92)
- Neural Networks > Deep Learning (1.00)
- Reinforcement Learning (1.00)
- Statistical Learning (1.00)
- Natural Language (1.00)
- Representation & Reasoning > Uncertainty
- Bayesian Inference (0.92)
- Vision (1.00)