Modeling Player Engagement with Bayesian Hierarchical Models

AAAI Conferences

Modeling player engagement is a key challenge in games. However, the gameplay signatures of engaged players can be highly context-sensitive, varying based on where the game is used or what population of players is using it. Traditionally, models of player engagement are investigated in a particular context, and it is unclear how effectively these models generalize to other settings and populations. In this work, we investigate a Bayesian hierarchical linear model for multi-task learning to devise a model of player engagement from a pair of datasets that were gathered in two complementary contexts: a Classroom Study with middle school students and a Laboratory Study with undergraduate students. Both groups of players used similar versions of Crystal Island, an educational interactive narrative game for science learning. Results indicate that the Bayesian hierarchical model outperforms both pooled and context-specific models in cross-validation measures of predicting player motivation from in-game behaviors, particularly for the smaller Classroom Study group. Further, the posterior distributions of the model parameters indicate that the coefficient for a measure of gameplay performance differs significantly between groups. Drawing upon their capacity to share information across groups, hierarchical Bayesian methods provide an effective approach for modeling player engagement with data from similar, but different, contexts.
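
Below is a minimal, illustrative sketch of a Bayesian hierarchical linear model of the kind described above, written in Python with PyMC on simulated data. The feature matrix, group labels, and engagement scores are made-up placeholders rather than the study's actual measures, and the model is a generic partial-pooling regression, not the authors' exact specification.

import numpy as np
import pymc as pm

rng = np.random.default_rng(42)
n_groups, n_features = 2, 4          # e.g., Classroom vs. Laboratory contexts
n_per_group = [40, 60]               # hypothetical unequal group sizes

# Simulated gameplay features, group labels, and engagement/motivation scores.
X = rng.normal(size=(sum(n_per_group), n_features))
group = np.repeat(np.arange(n_groups), n_per_group)
true_beta = np.array([[0.5, -0.2, 0.8, 0.0],
                      [0.3, -0.2, 0.1, 0.0]])
y = (X * true_beta[group]).sum(axis=1) + rng.normal(scale=0.5, size=len(group))

with pm.Model() as hierarchical_model:
    # Population-level priors shared by both contexts.
    mu_beta = pm.Normal("mu_beta", mu=0.0, sigma=1.0, shape=n_features)
    sigma_beta = pm.HalfNormal("sigma_beta", sigma=1.0, shape=n_features)

    # Group-specific coefficients, partially pooled toward the shared mean.
    beta = pm.Normal("beta", mu=mu_beta, sigma=sigma_beta,
                     shape=(n_groups, n_features))
    sigma_y = pm.HalfNormal("sigma_y", sigma=1.0)

    mu_y = (X * beta[group]).sum(axis=-1)
    pm.Normal("y_obs", mu=mu_y, sigma=sigma_y, observed=y)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=42)

# Comparing the posteriors of beta[0, j] and beta[1, j] for a given feature j is
# one way to check whether a coefficient differs between the two groups.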


Colonial Beach Teen Tops in State With Rubik's Cube

U.S. News

Ben, the son of Paul Christie and Sonya Stagnoli, and his sister Bella are home-schooled students who also take college courses. He'll graduate with an associate's degree from Germanna Community College next spring, at about the same time that he receives his high school diploma. She takes classes at Rappahannock Community College.


The reclusive inventor of the Rubik's Cube wants to do more than amuse you

Popular Science

For those outside the fold, the Rubik's cube is cognitive kryptonite. Until this week, I'd certainly never solved one. Even now, saying that I solved a Rubik's cube feels like a grievous overstatement of my accomplishments. The truth is that we--a patient pre-teen "cuber" whose solve time is 47 seconds, her slightly-less-patient middle school teacher (whose solve time, she's embarrassed to admit, is closer to a minute and a half), and me--completed a cube together. The site of my public humiliation could not have been more incongruous with the task at hand.


Game-Related Examples of Artificial Intelligence

AAAI Conferences

The field of artificial intelligence needs to attract new researchers to continue current explorations and to look for novel approaches to tomorrow's problems. One approach involves providing students with learning tools that excite their imagination and help them gain an appreciation for what artificial intelligence can do. The tools described here are used in an undergraduate course at Sam Houston State University. They include heuristic-driven search on a potential game's terrain map, reinforcement learning in a tank battle game, and game-tree search techniques in tic-tac-toe.
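
To give a flavor of the third example, here is a minimal, self-contained Python sketch of game-tree search (plain minimax) for tic-tac-toe. It is illustrative only and is not the course material described in the abstract.

# Minimax over the tic-tac-toe game tree: X maximizes, O minimizes.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) with score from X's perspective: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    best_score = -2 if player == "X" else 2
    best_move = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        if (player == "X" and score > best_score) or (player == "O" and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

if __name__ == "__main__":
    # X to move and can win immediately by playing square 2.
    board = ["X", "X", " ",
             "O", "O", " ",
             " ", " ", " "]
    print(minimax(board, "X"))  # -> (1, 2)

Searching from the empty board with the same function yields a score of 0, reflecting the well-known result that optimal play ends in a draw.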


Online Learning and Planning in Partially Observable Domains without Prior Knowledge

arXiv.org Artificial Intelligence

Acting optimally in stochastic, partially observable domains is a challenging problem. The standard approach is to first learn a model of the domain and then use the learned model to find a (near-)optimal policy. However, learning the model offline often requires storing the entire training dataset and cannot exploit data generated during the planning phase. Furthermore, current research usually assumes the learned model is accurate or presupposes knowledge of the nature of the unobservable part of the world. In this paper, for systems with discrete settings, we propose a model-based planning approach that builds on Predictive State Representations (PSRs), in which both the learning and planning phases are executed online and no prior knowledge of the underlying system is required. Experimental results show that, compared to state-of-the-art approaches, our algorithm achieves a high level of performance with no prior knowledge provided, along with the theoretical advantages of PSRs. Source code is available at https://github.com/DMU-XMU/PSR-MCTS-Online.
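
For readers unfamiliar with PSRs, the following is a small, illustrative Python sketch of the linear-PSR prediction-vector update that underlies such approaches. The parameter matrices and initial prediction vector are made-up placeholders (not learned from data, and not the paper's implementation), so the numbers only demonstrate the algebraic form of the update.

import numpy as np

rng = np.random.default_rng(0)
n_core_tests = 3

# Hypothetical learned PSR parameters for one action-observation pair (a, o):
# column i of M_ao is the weight vector for the extended test (a, o, q_i),
# and m_ao is the weight vector for the one-step test (a, o).
M_ao = rng.random((n_core_tests, n_core_tests)) * 0.3
m_ao = rng.random(n_core_tests) * 0.3

def psr_update(b, M_ao, m_ao):
    """Update the prediction vector after taking action a and observing o:
       b'(q_i) = P(a o q_i | h) / P(a o | h)."""
    denom = b @ m_ao   # P(a o | h)
    numer = b @ M_ao   # vector of P(a o q_i | h) for each core test q_i
    return numer / denom

b = np.full(n_core_tests, 1.0 / n_core_tests)  # placeholder initial prediction vector
b = psr_update(b, M_ao, m_ao)
print(b)

Because the prediction vector is a sufficient statistic of history, an online planner (e.g., the Monte Carlo tree search suggested by the repository name) can branch on action-observation pairs and carry this vector forward instead of a latent-state belief.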