Eye Movement Feature Classification for Soccer Expertise Identification in Virtual Reality

arXiv.org Artificial Intelligence

Recent research on expertise assessment of soccer players has emphasized the importance of perceptual skills. Previous studies focused on either high experimental control or a natural presentation mode. To assess athletes' perceptual skills in an optimized manner, we captured omnidirectional in-field scenes and presented them to 12 expert, 9 intermediate, and 13 novice soccer goalkeepers on virtual reality glasses. All scenes were shown from the same natural goalkeeper perspective and ended after the return pass to the goalkeeper. Based on their responses and gaze behavior, we classified their expertise with common machine learning techniques. This pilot study shows promising results for the objective classification of goalkeepers' expertise based on their gaze behavior.
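The abstract does not name the specific classifiers or gaze features, so the following is only a rough sketch, assuming hypothetical aggregate gaze features (e.g., fixation duration, fixation count, saccade amplitude) and a scikit-learn random forest with cross-validation over the 34 goalkeepers.

```python
# Hypothetical sketch: classifying goalkeeper expertise (expert/intermediate/novice)
# from aggregated gaze features. The feature set and data loading are assumptions
# for illustration, not taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumed per-participant features, e.g., mean fixation duration, fixation count,
# mean saccade amplitude, response correctness.
X = np.random.rand(34, 4)                      # 34 participants x 4 gaze features
y = np.array([0] * 12 + [1] * 9 + [2] * 13)    # 0 = expert, 1 = intermediate, 2 = novice

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)      # stratified 5-fold by default
print(f"Mean CV accuracy: {scores.mean():.2f}")
```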


Deep semantic gaze embedding and scanpath comparison for expertise classification during OPT viewing

arXiv.org Machine Learning

Modeling eye movement behavior indicative of expertise is decisive in user evaluation. However, it is indisputable that task semantics affect gaze behavior. We present a novel approach to gaze scanpath comparison that incorporates convolutional neural networks (CNN) to process scene information at the fixation level. Image patches linked to the respective fixations are used as input for a CNN, and the resulting feature vectors provide the temporal and spatial gaze information necessary for scanpath similarity comparison. We evaluated our proposed approach on gaze data from expert and novice dentists interpreting dental radiographs using a local alignment similarity score. Our approach was capable of distinguishing experts from novices with 93% accuracy while incorporating the image semantics. Moreover, our scanpath comparison using image patch features has the potential to incorporate task semantics from a variety of tasks.
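As an illustration of the fixation-level embedding step, here is a minimal sketch assuming a pretrained torchvision ResNet-18 as the CNN and 64-pixel patches centered on each fixation; the paper's exact network, patch size, and preprocessing are not specified here.

```python
# Minimal sketch: one CNN feature vector per fixation-centered image patch.
# The backbone, patch size, and normalization are assumptions for illustration.
import torch
import torchvision.models as models
import torchvision.transforms as T

cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()        # keep the 512-d embedding, drop the classifier
cnn.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fixation_embeddings(image, fixations, patch=64):
    """Return one feature vector per fixation (x, y) on an H x W x 3 uint8 image."""
    feats = []
    for x, y in fixations:
        x0, y0 = max(0, x - patch // 2), max(0, y - patch // 2)
        crop = image[y0:y0 + patch, x0:x0 + patch]
        with torch.no_grad():
            feats.append(cnn(preprocess(crop).unsqueeze(0)).squeeze(0))
    return torch.stack(feats)       # shape: (num_fixations, 512)
```

The resulting per-fixation vectors could then be aligned between two scanpaths with a local alignment similarity score, as described in the abstract.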


Learning to Predict Intent from Gaze During Robotic Hand-Eye Coordination

AAAI Conferences

Effective human-aware robots should anticipate their user’s intentions. During hand-eye coordination tasks, gaze often precedes hand motion and can serve as a powerful predictor for intent. However, cooperative tasks where a semi-autonomous robot serves as an extension of the human hand have rarely been studied in the context of hand-eye coordination. We hypothesize that accounting for anticipatory eye movements in addition to the movements of the robot will improve intent estimation. This research compares the application of various machine learning methods to intent prediction from gaze tracking data during robotic hand-eye coordination tasks. We found that, with proper feature selection, accuracies exceeding 94% and AUC greater than 91% are achievable with several classification algorithms, but that anticipatory gaze data did not improve intent prediction.
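Since the abstract emphasizes proper feature selection, the sketch below shows one generic way to combine univariate feature selection with a classifier and report accuracy and ROC AUC; the features, labels, and pipeline are placeholders, not the study's setup.

```python
# Placeholder sketch: feature selection plus classification for binary intent
# prediction from gaze features, reporting accuracy and ROC AUC. The data,
# feature count, and classifier choice are assumptions for illustration.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_validate

X = np.random.rand(500, 20)            # e.g., windowed gaze statistics per trial
y = np.random.randint(0, 2, 500)       # assumed binary intent label

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=8)),    # keep the 8 most informative features
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
res = cross_validate(pipe, X, y, cv=5, scoring=["accuracy", "roc_auc"])
print(f"accuracy={res['test_accuracy'].mean():.2f}, auc={res['test_roc_auc'].mean():.2f}")
```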


End-to-End Models for the Analysis of System 1 and System 2 Interactions based on Eye-Tracking Data

arXiv.org Machine Learning

While theories postulating a dual cognitive system take hold, quantitative confirmations are still needed to understand and identify interactions between the two systems or conflict events. Eye movements are among the most direct markers of individual attentive load and may serve as an important proxy of information. In this work we propose a computational method, within a modified visual version of the well-known Stroop test, for identifying different tasks and potential conflict events between the two systems through the collection and processing of eye movement data. A statistical analysis shows that the selected variables can characterize the variation of attentive load within different scenarios. Moreover, we show that machine learning techniques allow us to distinguish between different tasks with good classification accuracy and to investigate the gaze dynamics in more depth.
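The statistical step could look like the following generic comparison of a gaze-derived variable between two Stroop-like conditions; the variable, conditions, and test are illustrative assumptions, not the paper's exact analysis.

```python
# Illustrative sketch (not the paper's analysis): comparing a gaze-derived variable,
# e.g., mean fixation duration, between congruent and incongruent Stroop-like trials.
import numpy as np
from scipy import stats

# Placeholder data: one value per trial for each assumed condition, in milliseconds.
congruent = np.random.normal(250, 30, 100)
incongruent = np.random.normal(280, 35, 100)

t, p = stats.ttest_ind(congruent, incongruent, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```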


Fully Convolutional Neural Networks for Raw Eye Tracking Data Segmentation, Generation, and Reconstruction

arXiv.org Machine Learning

In this paper, we use fully convolutional neural networks for the semantic segmentation of eye tracking data. We also use these networks for reconstruction and, in conjunction with a variational auto-encoder, to generate eye movement data. The first improvement of our approach is that no input window is necessary: due to the use of fully convolutional networks, any input size can be processed directly. The second improvement is that the data used and generated are raw eye tracking data (position X, Y, and time) without preprocessing. This is achieved by pre-initializing the filters in the first layer and by building the input tensor along the z axis. We evaluated our approach on three publicly available datasets and compared the results to the state of the art.
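As a rough illustration of the fully convolutional idea (no fixed input window, per-sample output), here is a small PyTorch sketch; the layer configuration, channel ordering, and class set are assumptions and not the architecture from the paper.

```python
# Generic sketch of a 1D fully convolutional network that labels every sample of a
# raw eye tracking sequence (x, y, t). The architecture is assumed for illustration.
import torch
import torch.nn as nn

class GazeFCN(nn.Module):
    def __init__(self, in_channels=3, num_classes=3):   # e.g., fixation/saccade/pursuit
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(64, num_classes, kernel_size=1),   # per-timestep class scores
        )

    def forward(self, x):            # x: (batch, 3, T) with channels x, y, t
        return self.net(x)           # (batch, num_classes, T)

seq = torch.randn(1, 3, 1000)        # any sequence length works
print(GazeFCN()(seq).shape)          # torch.Size([1, 3, 1000])
```

Because every layer is convolutional, the same weights apply to sequences of any length, which is what removes the need for a sliding input window.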