Player goal recognition in digital games offers the promise of enabling games to dynamically customize player experience. Goal recognition aims to recognize players’ high-level intentions using a computational model trained on a player behavior corpus. Devising reliable goal recognition models is a significant challenge when the behavior corpus is characterized by highly idiosyncratic player actions. In this paper, we introduce deep LSTM-based goal recognition models that handle the inherent uncertainty stemming from noisy, non-optimal player behaviors. Empirical evaluation indicates that deep LSTMs outperform competitive baselines, including single-layer LSTMs, n-gram-encoded feedforward neural networks, and Markov logic networks, on a goal recognition corpus collected from an open-world educational game. In addition to metric-based goal recognition model evaluation, we investigate a visualization technique that shows a dynamic goal recognition model’s performance over the course of a player’s goal-seeking behavior. Deep LSTMs, which are capable of both sequentially and hierarchically extracting salient features of player behaviors, show significant promise as a goal recognition approach for open-world digital games.
Min, Wookhee (North Carolina State University) | Mott, Bradford (North Carolina State University) | Rowe, Jonathan (North Carolina State University) | Taylor, Robert (North Carolina State University) | Wiebe, Eric (North Carolina State University) | Boyer, Kristy Elizabeth (University of Florida) | Lester, James (North Carolina State University)
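The deep LSTM approach described above stacks recurrent layers so that lower layers extract sequential features from raw action traces and higher layers compose them hierarchically before a softmax maps the final hidden state to a distribution over candidate goals. As a rough illustration only, not the authors’ implementation, the forward pass of such a stacked LSTM goal recognizer can be sketched in plain NumPy; all names, dimensions, and the random toy weights are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: the four gates are computed from input x and previous hidden state h."""
    z = W @ x + U @ h + b                 # stacked pre-activations for all four gates
    n = h.shape[0]
    i = sigmoid(z[:n])                    # input gate
    f = sigmoid(z[n:2*n])                 # forget gate
    o = sigmoid(z[2*n:3*n])               # output gate
    g = np.tanh(z[3*n:])                  # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def deep_lstm_goal_probs(actions, layers, W_out, b_out):
    """Run a stack of LSTM layers over a sequence of encoded player actions,
    then map the top layer's final hidden state to a goal distribution."""
    seq = actions
    for (W, U, b, n) in layers:
        h, c, outputs = np.zeros(n), np.zeros(n), []
        for x in seq:
            h, c = lstm_step(x, h, c, W, U, b)
            outputs.append(h)
        seq = outputs                     # this layer's outputs feed the next layer
    logits = W_out @ seq[-1] + b_out
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # softmax over candidate goals

# Toy example: 6 action types, two 8-unit LSTM layers, 4 candidate goals.
rng = np.random.default_rng(0)
def layer(in_dim, n):
    return (rng.standard_normal((4 * n, in_dim)) * 0.1,
            rng.standard_normal((4 * n, n)) * 0.1,
            np.zeros(4 * n), n)

layers = [layer(6, 8), layer(8, 8)]
W_out, b_out = rng.standard_normal((4, 8)) * 0.1, np.zeros(4)
trace = [np.eye(6)[a] for a in [0, 3, 3, 5, 1]]   # a short one-hot player action trace
probs = deep_lstm_goal_probs(trace, layers, W_out, b_out)
```

In a trained model the weights would be fit to the behavior corpus; here they are random, so the sketch only shows the data flow from action trace to goal distribution.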
Recent years have seen a growing interest in player modeling to create player-adaptive digital games. As a core player-modeling task, goal recognition aims to recognize players’ latent, high-level intentions in a non-invasive fashion to deliver goal-driven, tailored game experiences. This paper reports on an investigation of multimodal data streams that provide rich evidence about players’ goals. Two data streams, game event traces and player gaze traces, are utilized to devise goal recognition models from a corpus collected from an open-world serious game for science education. Empirical evaluations of 140 players’ trace data suggest that multimodal LSTM-based goal recognition models outperform competitive baselines, including unimodal LSTMs as well as multimodal and unimodal CRFs, with respect to predictive accuracy and early prediction. The results demonstrate that player gaze traces have the potential to significantly enhance goal recognition models’ performance.
Min, Wookhee (North Carolina State University) | Ha, Eun Young (North Carolina State University) | Rowe, Jonathan (North Carolina State University) | Mott, Bradford (North Carolina State University) | Lester, James (North Carolina State University)
While many open-ended digital games feature non-linear storylines and multiple solution paths, it is challenging for game developers to create effective game experiences in these settings due to the freedom given to the player. To address these challenges, goal recognition, a computational player-modeling task, has been investigated to enable digital games to dynamically predict players’ goals. This paper presents a goal recognition framework based on stacked denoising autoencoders, a variant of deep learning. The learned goal recognition models, which are trained from a corpus of player interactions, not only offer improved performance but also eliminate the need for labor-intensive feature engineering, a substantial practical advantage. An evaluation demonstrates that the deep learning-based goal recognition framework significantly outperforms the previous state-of-the-art goal recognition approach based on Markov logic networks.
Baikadi, Alok (North Carolina State University) | Rowe, Jonathan P. (North Carolina State University) | Mott, Bradford W. (North Carolina State University) | Lester, James C. (North Carolina State University)
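A denoising autoencoder of the kind the stacked framework above builds on learns features by corrupting its input and training the network to reconstruct the clean original, so the features remain useful without hand engineering. As a minimal sketch, not the paper’s architecture, a single denoising layer with masking noise and one squared-error gradient step can be written in NumPy; the feature dimensions, learning rate, and corruption level here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def dae_step(x, W, b, W2, b2, lr=0.1, corrupt=0.3):
    """One denoising-autoencoder update: corrupt the input by randomly masking
    features, reconstruct the *clean* input, and take a gradient step on the
    squared reconstruction error."""
    mask = rng.random(x.shape) > corrupt      # masking (dropout-style) noise
    x_tilde = x * mask
    h = sigmoid(W @ x_tilde + b)              # hidden features of the noisy input
    x_hat = W2 @ h + b2                       # linear reconstruction
    err = x_hat - x                           # compared against the clean input
    # Backpropagate the squared error through decoder and encoder.
    dW2 = np.outer(err, h); db2 = err
    dh = W2.T @ err
    dpre = dh * h * (1 - h)
    dW = np.outer(dpre, x_tilde); db = dpre
    for p, g in ((W, dW), (b, db), (W2, dW2), (b2, db2)):
        p -= lr * g                           # in-place parameter update
    return float((err ** 2).sum())

# Hypothetical setup: 10-dimensional interaction features, 4 hidden units.
d, n = 10, 4
W, b = rng.standard_normal((n, d)) * 0.1, np.zeros(n)
W2, b2 = rng.standard_normal((d, n)) * 0.1, np.zeros(d)
data = rng.random((50, d))
losses = [np.mean([dae_step(x, W, b, W2, b2) for x in data]) for _ in range(30)]
```

Stacking would repeat this layer-wise on the learned hidden representations; the reconstruction loss falling over the epochs is the signal that useful features are being extracted automatically.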
Computational models of goal recognition hold considerable promise for enhancing the capabilities of drama managers and director agents for interactive narratives. The problem of goal recognition, and its more general form, plan recognition, has been the subject of extensive investigation in the AI community. However, there have been relatively few empirical investigations of goal recognition models in the intelligent narrative technologies community to date, and little is known about how computational models of interactive narrative can inform goal recognition. In this paper, we investigate a novel goal recognition model based on Markov Logic Networks (MLNs) that leverages narrative discovery events to enrich its representation of narrative state. An empirical evaluation shows that the enriched model outperforms a prior state-of-the-art MLN model in terms of accuracy, convergence rate, and the point of convergence.
Ha, Eun Young (North Carolina State University) | Rowe, Jonathan P. (North Carolina State University) | Mott, Bradford W. (North Carolina State University) | Lester, James C. (North Carolina State University)
Goal recognition is the task of inferring users’ goals from sequences of observed actions. By enabling player-adaptive digital games to dynamically adjust their behavior in concert with players’ changing goals, goal recognition can inform adaptive decision making for a broad range of entertainment, training, and education applications. This paper presents a goal recognition framework based on Markov logic networks (MLNs). The model’s parameters are learned directly from a corpus of actions collected through player interactions with a non-linear educational game. An empirical evaluation demonstrates that the MLN goal recognition framework accurately predicts players’ goals in a game environment with multiple solution paths.
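The underlying inference task, scoring candidate goals against a growing sequence of observed actions, can be illustrated without the MLN machinery. The sketch below is a deliberately simple substitute baseline, a per-goal first-order Markov model with add-one smoothing rather than the paper’s Markov logic approach; the goal labels and action vocabulary are invented for the toy corpus:

```python
from collections import Counter, defaultdict
import math

class MarkovGoalRecognizer:
    """Per-goal first-order Markov model over actions:
    P(goal | actions) is proportional to P(goal) * prod_t P(a_t | a_{t-1}, goal),
    with add-one smoothing on the transition counts."""
    def __init__(self):
        self.goal_counts = Counter()
        self.trans = defaultdict(Counter)   # (goal, prev_action) -> next-action counts
        self.actions = set()

    def fit(self, traces):                  # traces: [(goal, [action, ...]), ...]
        for goal, seq in traces:
            self.goal_counts[goal] += 1
            self.actions.update(seq)
            for prev, nxt in zip(['<s>'] + seq, seq):
                self.trans[(goal, prev)][nxt] += 1

    def posterior(self, seq):
        """Normalized posterior over goals given an observed action sequence."""
        V = len(self.actions) + 1           # vocabulary size for smoothing
        total = sum(self.goal_counts.values())
        scores = {}
        for goal, n in self.goal_counts.items():
            lp = math.log(n / total)        # log prior from goal frequencies
            for prev, nxt in zip(['<s>'] + seq, seq):
                c = self.trans[(goal, prev)]
                lp += math.log((c[nxt] + 1) / (sum(c.values()) + V))
            scores[goal] = lp
        z = max(scores.values())            # stable softmax over log scores
        norm = sum(math.exp(s - z) for s in scores.values())
        return {g: math.exp(s - z) / norm for g, s in scores.items()}

# Toy corpus: two goals with characteristic action orderings.
model = MarkovGoalRecognizer()
model.fit([('cure', ['talk', 'test', 'test']), ('explore', ['move', 'move', 'look']),
           ('cure', ['talk', 'test']), ('explore', ['move', 'look'])])
post = model.posterior(['talk', 'test'])
```

An MLN generalizes this by expressing such statistical regularities as weighted first-order logic formulas, which lets it share evidence across relational structure that a flat Markov model cannot capture.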