Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset

Zhang, Ruohan, Liu, Zhuode, Guan, Lin, Zhang, Luxin, Hayhoe, Mary M, Ballard, Dana H

arXiv.org Machine Learning 

The dataset consists of human actions and eye movements recorded while playing Atari video games. It currently has 44 hours of gameplay data from 16 games and a total of 2.97 million demonstrated actions. Human subjects played games in a frame-by-frame manner to allow enough decision time in order to obtain near-optimal decisions. This dataset could potentially be used for research in imitation learning and reinforcement learning, among other areas.

Additionally, previous research has shown that, given a task context, human visual attention is modulated by reward [5, 9, 17]. In performing a familiar task, objects with high potential reward or penalty attract human attention; hence gaze indicates the momentary attentional priorities over multiple objects. Therefore, gaze could be a potentially useful intermediate learning signal for imitation learning.
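One common way to turn raw gaze positions into an intermediate learning signal is to render each frame's fixations as a Gaussian heatmap and use it to re-weight the observation before feeding it to a policy network. The sketch below is a minimal, hypothetical illustration of that idea; the function names, the (x, y) gaze format, and the 84x84 frame size are assumptions for illustration, not the dataset's actual file format or the authors' method.

```python
import numpy as np

def gaze_heatmap(gaze_points, shape=(84, 84), sigma=3.0):
    """Render gaze fixations as a 2D Gaussian heatmap, normalized to [0, 1].

    gaze_points: iterable of (x, y) pixel coordinates (hypothetical format;
    the real dataset stores gaze data in its own per-frame files).
    """
    h, w = shape
    heatmap = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    for gx, gy in gaze_points:
        heatmap += np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
    if heatmap.max() > 0:
        heatmap /= heatmap.max()          # scale so the peak is 1.0
    return heatmap

def attention_masked_frame(frame, heatmap, floor=0.1):
    """Down-weight pixels far from gaze; `floor` keeps a minimum visibility
    so unattended regions are dimmed rather than erased."""
    return frame * (floor + (1.0 - floor) * heatmap)[..., None]
```

A policy could then be trained on `attention_masked_frame(frame, gaze_heatmap(points))` instead of the raw frame, or the heatmap could serve as an auxiliary prediction target; both are design choices this sketch leaves open.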
