Baikadi, Alok (North Carolina State University) | Rowe, Jonathan P. (North Carolina State University) | Mott, Bradford W. (North Carolina State University) | Lester, James C. (North Carolina State University)
Computational models of goal recognition hold considerable promise for enhancing the capabilities of drama managers and director agents for interactive narratives. The problem of goal recognition, and its more general form, plan recognition, has been the subject of extensive investigation in the AI community. However, there have been relatively few empirical investigations of goal recognition models in the intelligent narrative technologies community to date, and little is known about how computational models of interactive narrative can inform goal recognition. In this paper, we investigate a novel goal recognition model based on Markov Logic Networks (MLNs) that leverages narrative discovery events to enrich its representation of narrative state. An empirical evaluation shows that the enriched model outperforms a prior state-of-the-art MLN model in terms of accuracy, convergence rate, and convergence point.
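To make the online recognition setting concrete, the sketch below uses a deliberately simplified stand-in: an incremental naive Bayes update over observed player actions, not the Markov Logic Network model the abstract describes. The goals, observations, and probabilities are all hypothetical.

```python
# Simplified illustration of online goal recognition (naive Bayes, NOT
# the MLN model described in the abstract). All names and numbers below
# are hypothetical.
from math import prod  # Python 3.8+

def recognize(observations, goals, prior, likelihood):
    """Return the posterior P(goal | o_1..o_t) after each observation."""
    posteriors = []
    for t in range(1, len(observations) + 1):
        scores = {g: prior[g] * prod(likelihood[g].get(o, 1e-6)
                                     for o in observations[:t])
                  for g in goals}
        z = sum(scores.values())
        posteriors.append({g: s / z for g, s in scores.items()})
    return posteriors

goals = ["solve_mystery", "explore"]
prior = {"solve_mystery": 0.5, "explore": 0.5}
likelihood = {
    "solve_mystery": {"read_lab_report": 0.6, "test_food": 0.7},
    "explore": {"read_lab_report": 0.2, "test_food": 0.1},
}
posteriors = recognize(["read_lab_report", "test_food"],
                       goals, prior, likelihood)
```

The evaluation metrics named in the abstract can be read off such a posterior trajectory: accuracy is whether the top-ranked goal matches the player's true goal, and the convergence point is the earliest observation after which the top-ranked goal no longer changes.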
One of the major themes to emerge in interactive narrative research is authorability and authorial intent. With interactive narratives, the human author is not present at run-time. Thus, authoring interactive narratives is often a process of anticipating user actions in different contexts and devising computational mechanisms and data structures for responding to the participant. Generative approaches to interactive narrative, in which an automated narrative generation system assumes some of the authoring responsibility, further decouple the human designer from the participant's experience. We describe a general mechanism, called author goals, that human authors can use to assert authorial intent over generative narrative systems.
A key functionality provided by interactive narrative systems is narrative adaptation: tailoring story experiences in response to users’ actions and needs. We present a data-driven framework for dynamically tailoring events in interactive narratives using modular reinforcement learning. The framework involves decomposing an interactive narrative into multiple concurrent sub-problems, formalized as adaptable event sequences (AESs). Each AES is modeled as an independent Markov decision process (MDP). Policies for each MDP are induced using a corpus of user interaction data from an interactive narrative system with exploratory narrative adaptation policies. Rewards are computed based on users’ experiential outcomes. Conflicts between multiple policies are handled using arbitration procedures. In addition to introducing the framework, we describe a corpus of user interaction data from a testbed interactive narrative, CRYSTAL ISLAND, for inducing narrative adaptation policies. Empirical findings suggest that the framework can effectively shape users’ interactive narrative experiences.
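The decomposition described above can be sketched in miniature: each adaptable event sequence is treated as an independent tabular MDP solved by value iteration, with a simple winner-take-all rule arbitrating between the actions proposed by different AES policies. This is an illustrative sketch, not the authors' implementation; all states, actions, and rewards below are hypothetical, and a full system would induce policies from the user interaction corpus rather than from a hand-coded model.

```python
# Hypothetical sketch: one AES as a tiny MDP, plus policy arbitration.

def value_iteration(states, actions, transition, reward, gamma=0.9, iters=100):
    """Solve a small MDP; return a greedy policy and state values."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        for s in states:
            V[s] = max(reward(s, a) + gamma * V[transition(s, a)]
                       for a in actions)
    policy = {s: max(actions,
                     key=lambda a: reward(s, a) + gamma * V[transition(s, a)])
              for s in states}
    return policy, V

# One hypothetical AES: advance a narrative event sequence toward its
# final stage (state 2), or hold the current stage.
states, actions = [0, 1, 2], ["advance", "hold"]
transition = lambda s, a: min(s + 1, 2) if a == "advance" else s
reward = lambda s, a: 1.0 if (a == "advance" and s == 1) else 0.0

policy, V = value_iteration(states, actions, transition, reward)

def arbitrate(proposals):
    """Resolve a conflict between AES policies.

    proposals: list of (action, q_value) pairs, one per AES; the action
    from the module with the highest Q-value wins (one simple rule among
    several possible arbitration procedures).
    """
    return max(proposals, key=lambda p: p[1])[0]
```

Running this, the induced policy advances the event sequence from every non-terminal stage, and arbitration simply forwards the most valuable proposal.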
Drama Managers, a specific type of the more general Experience Manager, have become a common subject of study in the interactive narrative literature. Using a range of representational and computational approaches, researchers have repeatedly developed techniques that enable computers to generate, reason about, and adapt narratives in an interactive virtual setting. In order to fully realize an experience manager, seven representational and computational problems need to be solved, generally on a case-by-case basis. In other words, the choice to use an Experience Manager is the choice to model the design as, and implement solutions to, seven inter-dependent design problems. We explicitly articulate those design problems and provide a number of examples of methods that both motivate the design problems and illustrate a range of approaches to solving them.
Recent years have witnessed growing interest in data-driven approaches to interactive narrative planning and drama management. Reinforcement learning techniques show particular promise because they can automatically induce and refine models for tailoring game events by optimizing reward functions that explicitly encode the quality of interactive narrative experiences. However, due to the inherently subjective nature of interactive narrative experience, designing effective reward functions is challenging. In this paper, we investigate the impact of alternative reward formulations in a reinforcement learning-based interactive narrative planner for the Crystal Island game environment. We formalize interactive narrative planning as a modular reinforcement learning (MRL) problem. By decomposing interactive narrative planning into multiple independent sub-problems, MRL enables efficient induction of interactive narrative policies directly from a corpus of human players' experience data. Empirical analyses suggest that interactive narrative policies induced with MRL are likely to yield better player outcomes than heuristic or baseline policies. Furthermore, we observe that MRL-based interactive narrative planners are robust to alternate reward discount parameterizations.
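As a minimal illustration of the reward-design question raised above, one common formulation encodes post-experience outcome measures as a weighted scalar reward for policy induction. The sketch below is hypothetical: the outcome measures and weights are illustrative placeholders, not the paper's actual reward function.

```python
# Hypothetical sketch of one reward formulation: combine normalized
# experiential outcome measures into a scalar reward for RL.

def narrative_reward(outcomes, weights=None):
    """Combine outcome measures (each normalized to [0, 1]) into a reward.

    outcomes: dict mapping measure names (e.g. a learning-gain score or a
    self-reported engagement rating; illustrative only) to values.
    weights:  optional dict of per-measure weights; defaults to uniform.
    """
    weights = weights or {k: 1.0 / len(outcomes) for k in outcomes}
    return sum(weights[k] * v for k, v in outcomes.items())
```

For example, with uniform weights, `narrative_reward({"learning_gain": 0.8, "engagement": 0.6})` yields 0.7. Alternative formulations (different measures, weightings, or discounting of delayed outcomes) are exactly the design space whose impact the paper examines.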