Lester, James
Modeling Player Engagement with Bayesian Hierarchical Models
Sawyer, Robert (North Carolina State University) | Rowe, Jonathan (North Carolina State University) | Azevedo, Roger (University of Central Florida) | Lester, James (North Carolina State University)
Modeling player engagement is a key challenge in games. However, the gameplay signatures of engaged players can be highly context-sensitive, varying with where the game is used and which population of players is using it. Traditionally, models of player engagement are investigated in a particular context, and it is unclear how effectively these models generalize to other settings and populations. In this work, we investigate a Bayesian hierarchical linear model for multi-task learning to devise a model of player engagement from a pair of datasets gathered in two complementary contexts: a Classroom Study with middle school students and a Laboratory Study with undergraduate students. Both groups of players used similar versions of Crystal Island, an educational interactive narrative game for science learning. Results indicate that the Bayesian hierarchical model outperforms both pooled and context-specific models in cross-validated prediction of player motivation from in-game behaviors, particularly for the smaller Classroom Study group. Further, the posterior distributions of the model parameters indicate that the coefficient for a measure of gameplay performance differs significantly between groups. Drawing upon their capacity to share information across groups, hierarchical Bayesian methods provide an effective approach for modeling player engagement with data from similar, but different, contexts.
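The partial-pooling idea behind such hierarchical models can be illustrated with a minimal NumPy sketch (not the paper's model; the variance parameters and group values below are hypothetical): per-group coefficients are shrunk toward a shared mean, with smaller groups shrunk more, which is why the smaller Classroom Study benefits most from sharing information with the larger group.

```python
import numpy as np

def partial_pool(group_means, group_sizes, tau2, sigma2):
    """Shrink per-group estimates toward the grand mean.

    Illustrative only: with a known between-group variance tau2 and
    within-group noise variance sigma2, the posterior mean of each
    group coefficient is a precision-weighted blend of the group's own
    estimate and the pooled estimate. Smaller groups are shrunk more.
    """
    group_means = np.asarray(group_means, dtype=float)
    group_sizes = np.asarray(group_sizes, dtype=float)
    grand_mean = np.average(group_means, weights=group_sizes)
    # Weight on the group's own data: data precision n/sigma2
    # versus prior precision 1/tau2.
    w = (group_sizes / sigma2) / (group_sizes / sigma2 + 1.0 / tau2)
    return w * group_means + (1.0 - w) * grand_mean

# A small "Classroom" group is pulled toward the shared mean more than
# a large "Laboratory" group, mirroring how the hierarchical model lets
# the smaller dataset borrow strength from the larger one.
est = partial_pool(group_means=[0.2, 0.8], group_sizes=[20, 200],
                   tau2=0.5, sigma2=1.0)
```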
Simulating Player Behavior for Data-Driven Interactive Narrative Personalization
Wang, Pengcheng (North Carolina State University) | Rowe, Jonathan (North Carolina State University) | Min, Wookhee (North Carolina State University) | Mott, Bradford (North Carolina State University) | Lester, James (North Carolina State University)
Data-driven approaches to interactive narrative personalization show significant promise for applications in entertainment, training, and education. A common limitation of data-driven interactive narrative planning methods is that they require enormous amounts of training data, which are rarely available and expensive to collect from observations of human players. An alternative is to generate synthetic data from simulated players. In this paper, we present a long short-term memory (LSTM) neural network framework for simulating players to train data-driven interactive narrative planners. By leveraging a small amount of previously collected human player interaction data, we devise a generative player simulation model. A multi-task neural network architecture is proposed to estimate player actions and experiential outcomes from a single model. Empirical results demonstrate that the bipartite LSTM network produces better-performing player action prediction models than several baseline techniques, and that the multi-task LSTM derives comparable player outcome prediction models in less training time. We also find that synthetic data from the player simulation model contributes to training more effective interactive narrative planners than raw human player data alone.
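The rollout idea behind such player simulation can be sketched generically: given any model that maps an interaction history to a distribution over next actions, synthetic training episodes are sampled autoregressively. The `stub_model` below is a hypothetical stand-in for a trained sequence model such as the paper's LSTM.

```python
import random

def rollout(action_probs, actions, max_len, seed=None):
    """Sample one synthetic player trajectory.

    action_probs(history) -> list of probabilities over `actions`;
    here it stands in for a trained sequence model (e.g., an LSTM).
    Sampling stops at 'END' or after max_len steps.
    """
    rng = random.Random(seed)
    history = []
    for _ in range(max_len):
        probs = action_probs(history)
        action = rng.choices(actions, weights=probs, k=1)[0]
        history.append(action)
        if action == 'END':
            break
    return history

# Hypothetical 3-action stub: talking raises the chance of ending.
def stub_model(history):
    return [0.2, 0.3, 0.5] if 'TALK' in history else [0.5, 0.4, 0.1]

traj = rollout(stub_model, ['MOVE', 'TALK', 'END'], max_len=10, seed=7)
```

Repeating the rollout with different seeds yields an arbitrarily large corpus of synthetic episodes for planner training.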
Multimodal Goal Recognition in Open-World Digital Games
Min, Wookhee (North Carolina State University) | Mott, Bradford (North Carolina State University) | Rowe, Jonathan (North Carolina State University) | Taylor, Robert (North Carolina State University) | Wiebe, Eric (North Carolina State University) | Boyer, Kristy Elizabeth (University of Florida) | Lester, James (North Carolina State University)
Recent years have seen a growing interest in player modeling to create player-adaptive digital games. As a core player-modeling task, goal recognition aims to recognize players’ latent, high-level intentions in a non-invasive fashion to deliver goal-driven, tailored game experiences. This paper reports on an investigation of multimodal data streams that provide rich evidence about players’ goals. Two data streams, game event traces and player gaze traces, are utilized to devise goal recognition models from a corpus collected from an open-world serious game for science education. Empirical evaluations of 140 players’ trace data suggest that multimodal LSTM-based goal recognition models outperform competitive baselines, including unimodal LSTMs as well as multimodal and unimodal CRFs, with respect to predictive accuracy and early prediction. The results demonstrate that player gaze traces have the potential to significantly enhance goal recognition models’ performance.
Deep LSTM-Based Goal Recognition Models for Open-World Digital Games
Min, Wookhee (North Carolina State University) | Mott, Bradford (North Carolina State University) | Rowe, Jonathan (North Carolina State University) | Lester, James (North Carolina State University)
Player goal recognition in digital games offers the promise of enabling games to dynamically customize player experience. Goal recognition aims to recognize players’ high-level intentions using a computational model trained on a player behavior corpus. Devising reliable goal recognition models is a significant challenge when the behavior corpus is characterized by highly idiosyncratic player actions. In this paper, we introduce deep LSTM-based goal recognition models that handle the inherent uncertainty stemming from noisy, non-optimal player behaviors. Empirical evaluation indicates that deep LSTMs outperform competitive baselines, including single-layer LSTMs, n-gram encoded feedforward neural networks, and Markov logic networks, on a goal recognition corpus collected from an open-world educational game. In addition to metric-based evaluation, we investigate a visualization technique that shows a dynamic goal recognition model’s performance over the course of a player’s goal-seeking behavior. Deep LSTMs, which are capable of both sequentially and hierarchically extracting salient features of player behaviors, show significant promise as a goal recognition approach for open-world digital games.
A Generalized Multidimensional Evaluation Framework for Player Goal Recognition
Min, Wookhee (North Carolina State University) | Baikadi, Alok (University of Pittsburgh) | Mott, Bradford (North Carolina State University) | Rowe, Jonathan (North Carolina State University) | Liu, Barry (North Carolina State University) | Ha, Eun Young (IBM) | Lester, James (North Carolina State University)
Recent years have seen a growing interest in player modeling, which supports the creation of player-adaptive digital games. A central problem of player modeling is goal recognition, which aims to recognize players’ intentions from observable gameplay behaviors. Player goal recognition offers the promise of enabling games to dynamically adjust challenge levels, perform procedural content generation, and create believable NPC interactions. A growing body of work is investigating a wide range of machine learning-based goal recognition models. In this paper, we introduce GOALIE, a multidimensional framework for evaluating player goal recognition models. The framework integrates multiple metrics for player goal recognition models, including two novel metrics, n-early convergence rate and standardized convergence point. We demonstrate the application of the GOALIE framework with the evaluation of several player goal recognition models, including Markov logic network-based, deep feedforward neural network-based, and long short-term memory network-based goal recognizers on two different educational games. The results suggest that GOALIE effectively captures goal recognition behaviors that are key to next-generation player modeling.
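One plausible reading of the two convergence metrics can be sketched as follows (an interpretation, not GOALIE's exact definitions): a recognizer "converges" at the first index from which its predictions remain correct through the end of the observation sequence; the standardized convergence point normalizes that index by sequence length, and the n-early convergence rate is the fraction of sequences that converge at least n observations before the end.

```python
def convergence_index(preds, true_goal):
    """1-based index from which every prediction equals true_goal
    through the end of the sequence; None if it never converges."""
    idx = None
    for i, p in enumerate(preds, start=1):
        if p == true_goal:
            if idx is None:
                idx = i
        else:
            idx = None  # a later miss resets convergence
    return idx

def standardized_convergence_point(preds, true_goal):
    """Convergence index normalized by sequence length (lower = earlier)."""
    idx = convergence_index(preds, true_goal)
    return None if idx is None else idx / len(preds)

def n_early_convergence_rate(sequences, n):
    """Fraction of (preds, goal) pairs that converge at least n
    observations before the final one."""
    hits = 0
    for preds, goal in sequences:
        idx = convergence_index(preds, goal)
        if idx is not None and len(preds) - idx >= n:
            hits += 1
    return hits / len(sequences)
```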
Deep Learning-Based Goal Recognition in Open-Ended Digital Games
Min, Wookhee (North Carolina State University) | Ha, Eun Young (North Carolina State University) | Rowe, Jonathan (North Carolina State University) | Mott, Bradford (North Carolina State University) | Lester, James (North Carolina State University)
While many open-ended digital games feature non-linear storylines and multiple solution paths, the freedom given to the player makes it challenging for game developers to create effective game experiences in these settings. To address this challenge, goal recognition, a computational player-modeling task, has been investigated to enable digital games to dynamically predict players’ goals. This paper presents a goal recognition framework based on stacked denoising autoencoders, a variant of deep learning. The learned goal recognition models, which are trained from a corpus of player interactions, not only offer improved performance but also eliminate the need for labor-intensive feature engineering. An evaluation demonstrates that the deep learning-based goal recognition framework significantly outperforms the previous state-of-the-art goal recognition approach based on Markov logic networks.
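The core mechanism of a denoising autoencoder layer can be sketched in a few lines of NumPy (a generic illustration, not the paper's architecture; the dimensions and corruption rate are hypothetical): corrupt the input, encode it, reconstruct the original, and score the reconstruction error that training would minimize.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoising_autoencoder_step(x, W, b, b_prime, corruption=0.3):
    """One forward pass of a denoising autoencoder layer:
    corrupt the input, encode, reconstruct, and measure error.
    Tied weights (decoder uses W.T), sigmoid activations."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    mask = rng.random(x.shape) >= corruption   # zero-mask corruption
    x_tilde = x * mask                         # corrupted input
    h = sigmoid(x_tilde @ W + b)               # hidden code (learned features)
    x_hat = sigmoid(h @ W.T + b_prime)         # reconstruction of clean x
    loss = np.mean((x - x_hat) ** 2)           # reconstruction error
    return h, loss

# Hypothetical 8-dim gameplay feature vector compressed to a 4-dim code.
x = rng.random(8)
W = rng.normal(scale=0.1, size=(8, 4))
h, loss = denoising_autoencoder_step(x, W, np.zeros(4), np.zeros(8))
```

Stacking such layers, each trained on the codes of the layer below, yields the learned features that replace hand-engineered ones.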
Optimizing Player Experience in Interactive Narrative Planning: A Modular Reinforcement Learning Approach
Rowe, Jonathan (North Carolina State University) | Mott, Bradford (North Carolina State University) | Lester, James (North Carolina State University)
Recent years have witnessed growing interest in data-driven approaches to interactive narrative planning and drama management. Reinforcement learning techniques show particular promise because they can automatically induce and refine models for tailoring game events by optimizing reward functions that explicitly encode the quality of interactive narrative experiences. Because interactive narrative experience is inherently subjective, designing effective reward functions is challenging. In this paper, we investigate the impact of alternate formulations of reward in a reinforcement learning-based interactive narrative planner for the Crystal Island game environment. We formalize interactive narrative planning as a modular reinforcement learning (MRL) problem. By decomposing interactive narrative planning into multiple independent sub-problems, MRL enables efficient induction of interactive narrative policies directly from a corpus of human players’ experience data. Empirical analyses suggest that interactive narrative policies induced with MRL are likely to yield better player outcomes than heuristic or baseline policies. Furthermore, we observe that MRL-based interactive narrative planners are robust to alternate reward discount parameterizations.
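A common arbitration scheme for modular RL, sketched below, sums each sub-problem's Q-values and acts greedily on the total (an illustration of the general technique, not necessarily the paper's exact formulation; the states, actions, and Q-tables are hypothetical).

```python
def select_action(q_modules, state, actions):
    """Greedy arbitration for modular RL: each independent sub-problem
    contributes its Q-value for (state, action), and the planner takes
    the action with the highest summed value across modules."""
    def total_q(action):
        return sum(q[(state, action)] for q in q_modules)
    return max(actions, key=total_q)

# Two hypothetical narrative sub-problems with hand-set Q-tables:
# one values giving a hint, the other values advancing the plot.
q_hint = {('s0', 'give_hint'): 0.9, ('s0', 'advance_plot'): 0.1}
q_plot = {('s0', 'give_hint'): 0.3, ('s0', 'advance_plot'): 0.6}
best = select_action([q_hint, q_plot], 's0', ['give_hint', 'advance_plot'])
# give_hint totals 1.2 versus 0.7 for advance_plot, so best == 'give_hint'
```

Because each module's Q-table can be induced independently from the same player corpus, the decomposition keeps policy induction tractable.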
Reports of the AAAI 2011 Fall Symposia
Blisard, Sam (Naval Research Laboratory) | Carmichael, Ted (University of North Carolina at Charlotte) | Ding, Li (University of Maryland, Baltimore County) | Finin, Tim (University of Maryland, Baltimore County) | Frost, Wende (Naval Research Laboratory) | Graesser, Arthur (University of Memphis) | Hadzikadic, Mirsad (University of North Carolina at Charlotte) | Kagal, Lalana (Massachusetts Institute of Technology) | Kruijff, Geert-Jan M. (German Research Center for Artificial Intelligence) | Langley, Pat (Arizona State University) | Lester, James (North Carolina State University) | McGuinness, Deborah L. (Rensselaer Polytechnic Institute) | Mostow, Jack (Carnegie Mellon University) | Papadakis, Panagiotis (Sapienza University of Rome) | Pirri, Fiora (Sapienza University of Rome) | Prasad, Rashmi (University of Wisconsin-Milwaukee) | Stoyanchev, Svetlana (Columbia University) | Varakantham, Pradeep (Singapore Management University)
The Association for the Advancement of Artificial Intelligence was pleased to present the 2011 Fall Symposium Series, held Friday through Sunday, November 4–6, at the Westin Arlington Gateway in Arlington, Virginia. The titles of the seven symposia are as follows: (1) Advances in Cognitive Systems; (2) Building Representations of Common Ground with Intelligent Agents; (3) Complex Adaptive Systems: Energy, Information and Intelligence; (4) Multiagent Coordination under Uncertainty; (5) Open Government Knowledge: AI Opportunities and Challenges; (6) Question Generation; and (7) Robot-Human Teamwork in Dynamic Adverse Environment. The highlights of each symposium are presented in this report.
Learning Director Agent Strategies: An Inductive Framework for Modeling Director Agents
Lee, Seung (North Carolina State University) | Mott, Bradford (North Carolina State University) | Lester, James (North Carolina State University)
Interactive narrative environments offer significant potential for creating engaging narrative experiences that are tailored to individual users. Increasingly, applications in education, training, and entertainment are leveraging narrative to create rich interactive experiences in virtual storyworlds. A key challenge posed by these environments is devising accurate models of director agent strategies that determine the most appropriate director action for crafting customized story experiences. A promising approach is developing an empirically informed model of director agents’ decision-making strategies. In this paper, we propose a framework for learning models of director agent decision-making strategies by observing human-human interactions in an interactive narrative-centered learning environment. The results are encouraging and suggest that creating empirically driven models of director agent decision-making is a promising approach to interactive narrative.