Interactive Narrative Planner


Rowe

AAAI Conferences

Recent years have witnessed growing interest in data-driven approaches to interactive narrative planning and drama management. Reinforcement learning techniques show particular promise because they can automatically induce and refine models for tailoring game events by optimizing reward functions that explicitly encode the quality of interactive narrative experiences. Because interactive narrative experience is inherently subjective, however, designing effective reward functions is challenging. In this paper, we investigate the impact of alternate formulations of reward in a reinforcement learning-based interactive narrative planner for the Crystal Island game environment.


Wang

AAAI Conferences

A common feature of data-driven interactive narrative planning methods is that they require enormous amounts of training data, which is rarely available and expensive to collect from observations of human players. An alternative approach to obtaining data is to generate synthetic data from simulated players. In this paper, we present a long short-term memory (LSTM) neural network framework for simulating players to train data-driven interactive narrative planners. By leveraging a small amount of previously collected human player interaction data, we devise a generative player simulation model. A multi-task neural network architecture is proposed to estimate player actions and experiential outcomes from a single model. Empirical results demonstrate that the bipartite LSTM network produces better-performing player action prediction models than several baseline techniques, and the multi-task LSTM yields comparable player outcome prediction models with shorter training time. We also find that synthetic data from the player simulation model contributes to training more effective interactive narrative planners than raw human player data alone.
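The multi-task design described above can be illustrated with a minimal forward-pass sketch: one shared LSTM trunk encodes a player's interaction sequence, and two heads read the same hidden state, one classifying the next player action and one regressing an experiential outcome score. The layer sizes, feature counts, and weight names below are illustrative assumptions, not the paper's actual architecture, and no training loop is shown:

```python
import numpy as np

rng = np.random.default_rng(0)
IN, HID, N_ACTIONS = 8, 16, 5  # input features, hidden units, action classes

def init(shape):
    return rng.normal(0.0, 0.1, shape)

# One weight matrix per LSTM gate: input (i), forget (f), output (o),
# and candidate cell (c). Each sees [x_t, h_{t-1}] concatenated.
W = {g: init((HID, IN + HID)) for g in "ifoc"}
b = {g: np.zeros(HID) for g in "ifoc"}
W_action, W_outcome = init((N_ACTIONS, HID)), init((1, HID))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(seq):
    """Run the shared LSTM trunk over a (T, IN) sequence of player events."""
    h, c = np.zeros(HID), np.zeros(HID)
    for x in seq:
        z = np.concatenate([x, h])
        i = sigmoid(W["i"] @ z + b["i"])
        f = sigmoid(W["f"] @ z + b["f"])
        o = sigmoid(W["o"] @ z + b["o"])
        g = np.tanh(W["c"] @ z + b["c"])
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

def predict(seq):
    """Two task heads share one encoding of the interaction history."""
    h = lstm_encode(seq)
    logits = W_action @ h
    action_probs = np.exp(logits) / np.exp(logits).sum()  # softmax head
    outcome = float(W_outcome @ h)                        # scalar outcome head
    return action_probs, outcome

probs, outcome = predict(rng.normal(size=(10, IN)))
```

Because both heads are driven by the same encoding, gradients from the action and outcome losses would update a shared representation during training, which is the usual motivation for a multi-task design: comparable outcome prediction at lower total training cost than two separate networks.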