Playstyle


Perceptual Similarity for Measuring Decision-Making Style and Policy Diversity in Games

Lin, Chiu-Chou, Chiu, Wei-Chen, Wu, I-Chen

arXiv.org Artificial Intelligence

Defining and measuring decision-making styles, also known as playstyles, is crucial in gaming, where these styles reflect a broad spectrum of individuality and diversity. However, finding a universally applicable measure for these styles poses a challenge. Building on Playstyle Distance, the first unsupervised metric to measure playstyle similarity based on game screens and raw actions by identifying comparable states with discrete representations for computing policy distance, we introduce three enhancements to increase accuracy: multiscale analysis with varied state granularity, a perceptual kernel rooted in psychology, and the utilization of the intersection-over-union method for efficient evaluation. These innovations not only advance measurement precision but also offer insights into human cognition of similarity. Across two racing games and seven Atari games, our techniques significantly improve the precision of zero-shot playstyle classification, achieving an accuracy exceeding 90% with fewer than 512 observation-action pairs--less than half an episode of these games. Furthermore, our experiments with 2048 and Go demonstrate the potential of discrete playstyle measures in puzzle and board games. We also develop an algorithm for assessing decision-making diversity using these measures. Our findings improve the measurement of end-to-end game analysis and the evolution of artificial intelligence for diverse playstyles.
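The intersection-over-union idea mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the discrete state codes have already been computed and are available as hashable tokens.

```python
# Hypothetical sketch: comparing two players' discrete state sets with
# intersection-over-union (IoU), one ingredient the paper adds for
# efficient evaluation. States here are plain hashable tokens; in the
# paper they come from a learned discrete representation of game screens.

def iou(states_a, states_b):
    """Intersection-over-union of two sets of discrete states."""
    a, b = set(states_a), set(states_b)
    if not a and not b:
        return 1.0  # two empty trajectories are trivially identical
    return len(a & b) / len(a | b)

# Toy usage: two players whose trajectories share two of six states.
player_1 = ["s0", "s1", "s2", "s3"]
player_2 = ["s2", "s3", "s4", "s5"]
print(iou(player_1, player_2))  # 2 shared / 6 total = 0.333...
```

A higher IoU means the two players visited more comparable states, which in turn gives the policy-distance computation more aligned observations to work with.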


Generating Personas for Games with Multimodal Adversarial Imitation Learning

Ahlberg, William, Sestini, Alessandro, Tollmar, Konrad, Gisslén, Linus

arXiv.org Artificial Intelligence

Reinforcement learning has been widely successful in producing agents capable of playing games at a human level. However, this requires complex reward engineering, and the agent's resulting policy is often unpredictable. Going beyond reinforcement learning is necessary to model a wide range of human playstyles, which can be difficult to represent with a reward function. This paper presents a novel imitation learning approach to generate multiple persona policies for playtesting. Multimodal Generative Adversarial Imitation Learning (MultiGAIL) uses an auxiliary input parameter to learn distinct personas using a single-agent model. MultiGAIL is based on generative adversarial imitation learning and uses multiple discriminators as reward models, inferring the environment reward by comparing the agent and distinct expert policies. The reward from each discriminator is weighted according to the auxiliary input. Our experimental analysis demonstrates the effectiveness of our technique in two environments with continuous and discrete action spaces.
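The reward mixing described above can be illustrated with a minimal sketch. The function and variable names are assumptions for illustration, not the paper's API: each persona's discriminator is reduced to a precomputed scalar reward, and the auxiliary input is a weight vector over personas.

```python
import numpy as np

# Hypothetical sketch of MultiGAIL-style reward mixing: one discriminator
# per expert persona, with the auxiliary input weighting their rewards.
# The discriminator outputs here are stand-in scalars, not trained networks.

def mixed_reward(disc_rewards, aux_weights):
    """Weighted sum of per-discriminator rewards; weights are normalized."""
    disc_rewards = np.asarray(disc_rewards, dtype=float)
    aux_weights = np.asarray(aux_weights, dtype=float)
    aux_weights = aux_weights / aux_weights.sum()  # normalize auxiliary input
    return float(np.dot(disc_rewards, aux_weights))

# Auxiliary input leaning toward persona 0 ("aggressive", say):
print(mixed_reward([0.9, 0.2], [0.8, 0.2]))  # 0.9*0.8 + 0.2*0.2 = 0.76
```

Varying the auxiliary weights at inference time is what lets a single trained agent interpolate between the distinct expert personas.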


Understanding why shooters shoot -- An AI-powered engine for basketball performance profiling

Pascual, Alejandro Rodriguez, Mehta, Ishan, Khan, Muhammad, Rodriz, Frank, Yu, Rose

arXiv.org Artificial Intelligence

Understanding player shooting profiles is an essential part of basketball analysis: knowing where certain opposing players like to shoot from can help coaches neutralize their opponents' offensive game plans, while understanding where their own players are most comfortable can help them develop more effective offensive strategies. An automatic tool that can provide these performance profiles in a timely manner can become invaluable for coaches seeking to maximize both the effectiveness of their game plan and the time dedicated to practice and other related activities. Additionally, basketball is dictated by many variables, such as playstyle and game dynamics, that can change the flow of the game and, by extension, player performance profiles. It is crucial that performance profiles reflect these diverse playstyles as well as the fast-changing dynamics of the game. We present a tool that can visualize player performance profiles in a timely manner while taking into account factors such as playstyle and game dynamics. Our approach generates interpretable heatmaps that allow us to identify and analyze how non-spatial factors, such as game dynamics or playstyle, affect player performance profiles.


Configurable Agent With Reward As Input: A Play-Style Continuum Generation

de Woillemont, Pierre Le Pelletier, Labory, Rémi, Corruble, Vincent

arXiv.org Artificial Intelligence

Modern video games are becoming richer and more complex in terms of game mechanics. This complexity allows a wide variety of ways to play the game to emerge across players. From the point of view of the game designer, this means one needs to anticipate many different ways the game could be played. Machine Learning (ML) could help address this issue. More precisely, Reinforcement Learning is a promising answer to the need for automating video game testing. In this paper we present a video game environment which lets us define multiple play-styles. We then introduce CARI: a Configurable Agent with Reward as Input, an agent able to simulate a wide continuum of play-styles. It is not constrained to extreme archetypal behaviors like current methods using reward shaping, and it achieves this through a single training loop instead of the usual one loop per play-style. We compare this novel training approach with the more classic reward shaping approach and conclude that CARI can outperform the baseline on archetype generation as well. This novel agent could be used to investigate behaviors and balancing during the production of a video game with a realistic amount of training time.


How A.I. will bring us closer to an Ideal Player Journey -- Blog

#artificialintelligence

A captivating experience is what we are looking for when we read books, watch movies, or play games. What separates the games from other media is interactivity. In other words, the ability to take an active part in the experience. The freedom to shape your own Journey is a dream worth chasing. Are we close to this dream already?


Devlin

AAAI Conferences

Monte Carlo Tree Search (MCTS) has become a popular solution for controlling non-player characters. Its use has repeatedly been shown to be capable of creating strong game-playing opponents. However, the emergent playstyle of agents using MCTS is not necessarily human-like, believable or enjoyable. AI Factory Spades, currently the top-rated Spades game in the Google Play store, uses a variant of MCTS to control non-player characters. In collaboration with the developers, we collected gameplay data from 27,592 games and showed in a previous study that the playstyle of human players significantly differed from that of the non-player characters. This paper presents a method of biasing MCTS using human gameplay data to create Spades-playing agents that emulate human play whilst maintaining a strong, competitive performance. The methods of player modelling and biasing MCTS presented in this study are generally applicable to digital games with discrete actions.
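One common way to bias MCTS toward human-like moves is to fold a prior from a player model into the selection rule; the PUCT-style score below is a hedged illustration of that general technique, with made-up statistics, and is not necessarily the exact formula used in the study.

```python
import math

# Hypothetical sketch of biasing MCTS node selection with a human-play
# prior: `prior` would come from a player model fit to gameplay data.
# All node statistics below are illustrative values.

def biased_uct(q, n_child, n_parent, prior, c=1.4):
    """UCT-style score with a prior term (AlphaGo-style PUCT)."""
    return q + c * prior * math.sqrt(n_parent) / (1 + n_child)

# A move humans favor (high prior) can outrank a slightly stronger move,
# pulling the search toward human-like play:
strong_move = biased_uct(q=0.55, n_child=10, n_parent=100, prior=0.1)
human_move = biased_uct(q=0.50, n_child=10, n_parent=100, prior=0.5)
print(human_move > strong_move)  # True
```

As visit counts grow, the prior term shrinks, so the bias shapes early exploration without permanently overriding move quality.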


Mimicking Playstyle by Adapting Parameterized Behavior Trees in RTS Games

Kozik, Andrzej, Machalewski, Tomasz, Marek, Mariusz, Ochmann, Adrian

arXiv.org Artificial Intelligence

The advent of Behavior Trees (BTs) impacted the field of Artificial Intelligence (AI) in games by providing a flexible and natural representation of non-player character (NPC) logic that is manageable by game designers. Nevertheless, increasing pressure for ever-better NPC AI agents has forced the complexity of handcrafted BTs to become barely tractable and error-prone. On the other hand, while many just-launched online games suffer from player shortage, the existence of AI with a broad range of capabilities could increase player retention. Therefore, to handle the above challenges, recent trends in the field have focused on the automatic creation of AI agents: from deep- and reinforcement-learning techniques to combinatorial (constrained) optimization and the evolution of BTs. In this paper, we present a novel approach to the semi-automatic construction of AI agents that mimic and generalize given human gameplays by adapting and tuning an expert-created BT under a developed similarity metric between source and BT gameplays. To this end, we formulated a mixed discrete-continuous optimization problem, in which topological and functional changes of the BT are reflected in numerical variables, and constructed a dedicated hybrid metaheuristic. The performance of the presented approach was verified experimentally in a prototype real-time strategy game. The experiments confirmed the efficiency and promise of the presented approach, which is going to be applied in a commercial game.


An Unsupervised Video Game Playstyle Metric via State Discretization

Lin, Chiu-Chou, Chiu, Wei-Chen, Wu, I-Chen

arXiv.org Artificial Intelligence

When playing video games, different players usually have their own playstyles. Recently, there have been great improvements in the playing strength of video game AIs. However, past research on analyzing player behavior still relied on heuristic rules or on behavior features requiring game-environment support, making it exhausting for developers to define the features that discriminate various playstyles. In this paper, we propose the first metric for video game playstyles derived directly from game observations and actions, without any prior specification of the playstyle in the target game. Our proposed method is built upon a novel scheme for learning discrete representations that maps game observations into latent discrete states, such that playstyles can be exhibited through these discrete states. Namely, we measure the playstyle distance based on game observations aligned to the same states. We demonstrate the high playstyle accuracy of our metric in experiments on several video game platforms, including TORCS, RGSK, and seven Atari games, and for different agents including rule-based AI bots, learning-based AI bots, and human players.
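The core idea of aligning observation-action pairs on shared discrete states and comparing per-state behavior can be sketched as follows. This is a simplified stand-in, assuming the discrete state encoder already exists (pairs are `(state, action)` tokens), and using total-variation distance between action distributions as an illustrative per-state distance rather than the paper's exact formulation.

```python
from collections import Counter, defaultdict

# Hypothetical sketch: measure playstyle distance by grouping each
# player's (state, action) pairs by discrete state, then averaging a
# per-state distance between their action distributions over the states
# both players visited.

def playstyle_distance(pairs_a, pairs_b):
    by_state = defaultdict(lambda: (Counter(), Counter()))
    for s, a in pairs_a:
        by_state[s][0][a] += 1
    for s, a in pairs_b:
        by_state[s][1][a] += 1
    shared = [s for s, (ca, cb) in by_state.items() if ca and cb]
    if not shared:
        return None  # no comparable states, distance undefined
    total = 0.0
    for s in shared:
        ca, cb = by_state[s]
        na, nb = sum(ca.values()), sum(cb.values())
        acts = set(ca) | set(cb)
        # total-variation distance between the two action distributions
        total += 0.5 * sum(abs(ca[a] / na - cb[a] / nb) for a in acts)
    return total / len(shared)

# Identical behavior on shared states gives distance 0.
same = playstyle_distance([("s0", "L"), ("s1", "R")],
                          [("s0", "L"), ("s1", "R")])
print(same)  # 0.0
```

Because only shared states are compared, the metric needs no game-specific features: any two trajectories that pass through comparable discrete states become directly comparable.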


Afros in Azeroth: the quest for diversity in World of Warcraft

The Guardian

Recently, I've spent quite a lot of time pondering what an orc would look like with an afro. This, naturally, led to contemplation of an axe-afro-comb combo, and whether such a contraption would fall under blacksmithing or engineering. That's because I've been playing Shadowlands, the eighth expansion to World of Warcraft. For Warcraft fans, there's a lot to be excited about: the new game allows players to explore the afterlife – reviving classic characters such as Kael'thas Sunstrider – and introduces a new style of play in Torghast, a deliciously punishing dungeon that changes each time you visit. There's also a clear recruiting drive for new players with a simplified introduction, more straightforward questing and reconfigured character growth, all aimed at making this venerable and complex game less daunting.


Cat Bored at Home While You're at Work? There's an Adorable, Little Robot for That

#artificialintelligence

The Ebo also has different playstyles (like hyper and lazy) in order to appeal to cats of various ages and temperaments. If you aren't sure what your cat's playstyle is, no problem. This robot is able to use its built-in AI "to learn with your cat for creative exercise routines," according to Enabot, and once playtime is over, the Ebo will return to its charging station on its own.