

WhatELSE: Shaping Narrative Spaces at Configurable Level of Abstraction for AI-bridged Interactive Storytelling

Lu, Zhuoran, Zhou, Qian, Wang, Yi

arXiv.org Artificial Intelligence

Generative AI significantly enhances player agency in interactive narratives (IN) by enabling just-in-time content generation that adapts to player actions. While delegating generation to AI makes IN more interactive, it becomes challenging for authors to control the space of possible narratives, the space within which the final story experienced by the player emerges from their interaction with the AI. In this paper, we present WhatELSE, an AI-bridged IN authoring system that creates narrative possibility spaces from example stories. WhatELSE provides three views (narrative pivot, outline, and variants) to help authors understand the narrative space, along with corresponding tools that leverage linguistic abstraction to control the space's boundaries. Using novel LLM-based narrative planning approaches, WhatELSE further unfolds the narrative space into executable game events. Through a user study (N=12) and technical evaluations, we found that WhatELSE enables authors to perceive and edit the narrative space and generates engaging interactive narratives at play-time.


Efficient Monte Carlo Counterfactual Regret Minimization in Games with Many Player Actions

Neural Information Processing Systems

Counterfactual Regret Minimization (CFR) is a popular, iterative algorithm for computing strategies in extensive-form games. The Monte Carlo CFR (MCCFR) variants reduce the per iteration time cost of CFR by traversing a smaller, sampled portion of the tree. The previous most effective instances of MCCFR can still be very slow in games with many player actions since they sample every action for a given player. In this paper, we present a new MCCFR algorithm, Average Strategy Sampling (AS), that samples a subset of the player's actions according to the player's average strategy. Our new algorithm is inspired by a new, tighter bound on the number of iterations required by CFR to converge to a given solution quality. In addition, we prove a similar, tighter bound for AS and other popular MCCFR variants.
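
The core sampling rule — visiting only a subset of a player's actions, biased toward the player's average strategy — can be sketched as below. This is a simplified stand-in for the paper's exact AS parameterization: the epsilon floor, the normalization, and the function name are illustrative assumptions, not the published formula.

```python
import random

def sample_actions(avg_strategy, epsilon=0.05, rng=random.random):
    """Keep each action a with probability max(epsilon, normalized
    average-strategy weight of a) -- a simplified sketch of the
    Average Strategy Sampling idea: frequently played actions are
    almost always explored, rarely played ones only occasionally."""
    total = sum(avg_strategy.values()) or 1.0
    sampled = []
    for action, weight in avg_strategy.items():
        p = max(epsilon, weight / total)
        if rng() < min(p, 1.0):
            sampled.append(action)
    return sampled
```

Because each action is included independently, the traversal touches far fewer branches per iteration than full CFR while still occasionally revisiting low-probability actions via the epsilon floor.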


I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons

Zhou, Pei, Zhu, Andrew, Hu, Jennifer, Pujara, Jay, Ren, Xiang, Callison-Burch, Chris, Choi, Yejin, Ammanabrolu, Prithviraj

arXiv.org Artificial Intelligence

We propose a novel task, G4C, to study teacher-student natural language interactions in a goal-driven and grounded environment. Dungeons and Dragons (D&D), a role-playing game, provides an ideal setting to investigate such interactions. Here, the Dungeon Master (DM), i.e., the teacher, guides the actions of several players -- students, each with their own personas and abilities -- to achieve shared goals grounded in a fantasy world. Our approach is to decompose and model these interactions into (1) the DM's intent to guide players toward a given goal; (2) the DM's guidance utterance to the players expressing this intent; and (3) a theory-of-mind (ToM) model that anticipates the players' reaction to the guidance one turn into the future. We develop a novel reinforcement learning (RL) method for training a DM that generates guidance for players by rewarding utterances where the intent matches the ToM-anticipated player actions. Human and automated evaluations show that a DM trained to explicitly model intents and incorporate ToM of the players using RL generates better-quality guidance that is 3x more likely to fulfill the DM's intent than a vanilla natural language generation (NLG) approach.


Leveraging Cluster Analysis to Understand Educational Game Player Experiences and Support Design

Swanson, Luke, Gagnon, David, Scianna, Jennifer, McCloskey, John, Spevacek, Nicholas, Slater, Stefan, Harpstead, Erik

arXiv.org Artificial Intelligence

Luke Swanson, Field Day Lab, University of Wisconsin-Madison; David Gagnon, Field Day Lab, University of Wisconsin-Madison; Jennifer Scianna, Field Day Lab, University of Wisconsin-Madison; John McCloskey, Field Day Lab, University of Wisconsin-Madison; Nicholas Spevacek, Field Day Lab, University of Wisconsin-Madison; Stefan Slater, Graduate School of Education, University of Pennsylvania; Erik Harpstead, Human-Computer Interaction Institute, Carnegie Mellon University

Abstract: The ability to understand their audience's play styles and resulting experience is an essential tool for educational game designers improving a game's design. As a game is subjected to large-scale player testing, designers need inexpensive, automated methods for categorizing patterns of player-game interactions. In this paper we present a simple, reusable process using best practices for data clustering, feasible for use within a small educational game studio. We apply the method to a real-time strategy game, processing game telemetry data to determine categories of players based on their in-game actions, the feedback they received, and their progress through the game.

Introduction: Playtesting is a well-adopted method for iteratively testing and improving educational games. As a game moves through development phases, members of the target audience are given versions of the game to play and in exchange generate feedback. This feedback can then be used to validate the design decisions made during the game's development and to direct the next iterations of work.
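
The clustering step at the core of such a pipeline can be sketched with plain k-means over per-player telemetry feature vectors. This is a generic illustration under assumed inputs (made-up 2-D feature tuples); the paper's actual process also covers feature engineering and cluster-count selection.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign each player's feature vector to its
    nearest center, then recompute centers, repeating for a fixed
    number of iterations. Returns (centers, clusters)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        new_centers = []
        for i, c in enumerate(clusters):
            if c:  # mean of assigned points; keep old center if empty
                new_centers.append(tuple(sum(x) / len(c) for x in zip(*c)))
            else:
                new_centers.append(centers[i])
        centers = new_centers
    return centers, clusters
```

On well-separated play-style groups this converges in a handful of iterations; a real studio pipeline would add feature scaling and a silhouette- or elbow-based choice of k.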


ESTA: An Esports Trajectory and Action Dataset

Xenopoulos, Peter, Silva, Claudio

arXiv.org Artificial Intelligence

Sports, due to their global reach and impact-rich prediction tasks, are an exciting domain to deploy machine learning models. However, data from conventional sports is often unsuitable for research use due to its size, veracity, and accessibility. To address these issues, we turn to esports, a growing domain that encompasses video games played in a capacity similar to conventional sports. Since esports data is acquired through server logs rather than peripheral sensors, esports provides a unique opportunity to obtain a massive collection of clean and detailed spatiotemporal data, similar to those collected in conventional sports. To parse esports data, we develop awpy, an open-source esports game log parsing library that can extract player trajectories and actions from game logs. Using awpy, we parse 8.6m actions, 7.9m game frames, and 417k trajectories from 1,558 game logs from professional Counter-Strike tournaments to create the Esports Trajectory and Actions (ESTA) dataset. ESTA is one of the largest and most granular publicly available sports data sets to date. We use ESTA to develop benchmarks for win prediction using player-specific information.
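
As an illustration of the kind of win-prediction features such frame-level data supports, the sketch below derives team-difference features from a single game-state frame. The frame schema and field names here are hypothetical stand-ins, not awpy's actual output format.

```python
def frame_features(frame):
    """Turn one game-state frame into simple win-prediction features.
    Assumes an illustrative schema: per-team dicts with parallel
    'alive' (0/1 flags) and 'hp' (health points) lists per player."""
    t, ct = frame["t"], frame["ct"]
    return {
        "alive_diff": sum(t["alive"]) - sum(ct["alive"]),  # player advantage
        "hp_diff": sum(t["hp"]) - sum(ct["hp"]),           # health advantage
    }
```

Features like these, computed per frame across hundreds of thousands of trajectories, are the typical inputs to the kind of win-prediction benchmarks the dataset supports.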


Ramirez

AAAI Conferences

Artificial intelligence (AI) techniques have been applied to video games to make the overall experience more enjoyable. In games with interactive storytelling (IS), player actions can substantially affect plot events and plot characters. Therefore, AI planning techniques have been used to shape the plot in response to player actions that conflict with authorial goals. While such methods are poised to increase player fun and agency, two recent implementations (ASD and PAST) have not been formally evaluated to date. In this paper we do so for the first time via a series of user studies. We show that ASD significantly enhances fun and agency, whereas PAST gets mixed results, with an interaction between the effects of the experience manager and the player's prior gaming experience in one user study, and marginally significant results for increased agency in a study with a constrained story domain.


Exploring the Long Short-Term Dependencies to Infer Shot Influence in Badminton Matches

Wang, Wei-Yao, Chan, Teng-Fong, Yang, Hui-Kuo, Wang, Chih-Chuan, Fan, Yao-Chung, Peng, Wen-Chih

arXiv.org Artificial Intelligence

Identifying significant shots in a rally is important for evaluating players' performance in badminton matches. While several studies have quantified player performance in other sports, analyzing badminton data remains largely untouched. In this paper, we introduce a badminton language to fully describe the process of a shot and propose a deep learning model composed of a novel short-term extractor and a long-term encoder for capturing a shot-by-shot sequence in a badminton rally, framing the problem as predicting the rally result. Our model incorporates an attention mechanism to make the action sequence's contribution to the rally result transparent, which is essential for badminton experts to obtain interpretable predictions. Experimental evaluation on a real-world dataset demonstrates that our proposed model outperforms strong baselines. The source code is publicly available at https://github.com/yao0510/Shot-Influence.
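
The attention idea — weighting each shot's contribution to the rally-outcome prediction so that influential shots become visible — can be sketched as softmax pooling over shot representations. The vectors and scores below are placeholder inputs standing in for what the paper's short-term extractor and long-term encoder would learn.

```python
import math

def attention_pool(shot_vectors, scores):
    """Softmax-attention pooling over a rally's shot representations.
    Returns the weighted-sum representation plus the per-shot weights,
    which serve as the interpretable 'shot influence' signal."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]        # softmax over shot scores
    pooled = [
        sum(w * v[i] for w, v in zip(weights, shot_vectors))
        for i in range(len(shot_vectors[0]))
    ]
    return pooled, weights
```

A shot with a high attention weight contributes more to the pooled rally representation, which is the transparency property the abstract highlights for expert interpretation.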


Reinforcement Learning Agents for Ubisoft's Roller Champions

Iskander, Nancy, Simoni, Aurelien, Alonso, Eloi, Peter, Maxim

arXiv.org Artificial Intelligence

In recent years, Reinforcement Learning (RL) has seen increasing popularity in research and popular culture. However, skepticism still surrounds the practicality of RL in modern video game development. In this paper, we demonstrate by example that RL can be a great tool for Artificial Intelligence (AI) design in modern, non-trivial video games. We present our RL system for Ubisoft's Roller Champions, a 3v3 Competitive Multiplayer Sports Game played on an oval-shaped skating arena. Our system is designed to keep up with agile, fast-paced development, taking 1--4 days to train a new model following gameplay changes. The AIs are adapted for various game modes, including a 2v2 mode and a Training with Bots mode, in addition to the Classic game mode, where they replace players who have disconnected. We observe that the AIs develop sophisticated coordinated strategies and, as an added bonus, can aid in balancing the game. Please see the accompanying video at https://vimeo.com/466780171 (password: rollerRWRL2020) for examples.


Monte Carlo Tree Search: Implementing Reinforcement Learning in Real-Time Game Player

#artificialintelligence

In this article, to answer these questions, we go through the fundamentals of Monte Carlo Tree Search. Since in the next articles we will implement this algorithm for the board game HEX, I try to explain the concepts through examples in this board game's environment. If you're more interested in the code, find it in this link. There is also a more optimized version, which runs on Linux thanks to Cython, and you can find it here. The name "Monte Carlo method" was coined by Stanislaw Ulam after he applied this statistical approach.
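
The four MCTS phases the article covers (selection, expansion, simulation, backpropagation) can be sketched on a toy Nim game rather than HEX. This is a minimal self-contained illustration, not the article's linked implementation; the game, the UCB constant, and the iteration count are all assumptions for demonstration.

```python
import math
import random

def moves(state):
    # Legal Nim moves: take 1 or 2 stones (never more than remain)
    return [m for m in (1, 2) if m <= state]

class Node:
    def __init__(self, state, to_move, parent=None, move=None):
        self.state, self.to_move = state, to_move
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

def uct(node, c=1.4):
    # UCB1: average win rate plus an exploration bonus
    return (node.wins / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def mcts(stones, to_move, iters=2000, seed=1):
    """MCTS for a tiny Nim game: take 1 or 2 stones, taking the
    last stone wins. Returns the most-visited root move."""
    rng = random.Random(seed)
    root = Node(stones, to_move)
    for _ in range(iters):
        node = root
        # 1. Selection: descend by UCB1 while fully expanded
        while node.children and len(node.children) == len(moves(node.state)):
            node = max(node.children, key=uct)
        # 2. Expansion: add one untried child, if any
        tried = {ch.move for ch in node.children}
        untried = [m for m in moves(node.state) if m not in tried]
        if untried:
            m = rng.choice(untried)
            node = Node(node.state - m, 1 - node.to_move, node, m)
            node.parent.children.append(node)
        # 3. Simulation: random playout to the end of the game
        state, player = node.state, node.to_move
        while state > 0:
            state -= rng.choice(moves(state))
            player = 1 - player
        winner = 1 - player  # the player who just took the last stone
        # 4. Backpropagation: a node's wins are credited to the
        #    player whose move led into it (i.e. not node.to_move)
        while node is not None:
            node.visits += 1
            if winner != node.to_move:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

In this Nim variant the winning strategy is to leave the opponent a multiple of 3 stones, and with a few thousand iterations the search concentrates its visits on exactly that move.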


Actions Speak Louder Than Goals: Valuing Player Actions in Soccer

Decroos, Tom, Bransen, Lotte, Van Haaren, Jan, Davis, Jesse

arXiv.org Machine Learning

Assessing the impact of the individual actions performed by soccer players during games is a crucial aspect of the player recruitment process. Unfortunately, most traditional metrics fall short in addressing this task as they either focus on rare events like shots and goals alone or fail to account for the context in which the actions occurred. This paper introduces a novel advanced soccer metric for valuing any type of individual player action on the pitch, be it with or without the ball. Our metric values each player action based on its impact on the game outcome while accounting for the circumstances under which the action happened. When applied to on-the-ball actions like passes, dribbles, and shots alone, our metric identifies Argentine forward Lionel Messi, French teenage star Kylian Mbappé, and Belgian winger Eden Hazard as the most effective players during the 2016/2017 season.
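
The metric's central idea — valuing an action by how it changes the acting team's chances of scoring and of conceding — reduces to a small formula. The probability inputs below stand in for the learned game-state estimates a full implementation would provide; the function name and signature are illustrative.

```python
def action_value(p_score_before, p_score_after,
                 p_concede_before, p_concede_after):
    """Value one on-the-ball action as the change it causes in the
    acting team's probability of scoring soon, minus the change in
    its probability of conceding soon. A forward pass that raises
    the scoring chance while risking a turnover is scored on both
    effects at once."""
    return ((p_score_after - p_score_before)
            - (p_concede_after - p_concede_before))
```

Summing these per-action values over a season yields a player rating that credits context-dependent contributions, not just shots and goals.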