
Collaborating Authors: Stephenson, Matthew


General Board Geometry

arXiv.org Artificial Intelligence

Game boards are described in the Ludii general game system by their underlying graphs, based on tiling, shape and graph operators, with the automatic detection of important properties such as topological relationships between graph elements, directions and radial step sequences. This approach allows most conceivable game boards to be described simply and succinctly.
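The following is a minimal Python sketch of this idea, not Ludii's actual representation or API: a square tiling generates the board graph, and adjacency and radial step sequences are then derived automatically from that graph. All names here are illustrative.

```python
# Illustrative sketch only: board-as-graph for a square tiling, with
# adjacency and radial step sequences derived from the graph.
from itertools import product

def square_board(rows, cols):
    """Return cells and adjacency for a square tiling (orthogonal + diagonal)."""
    cells = list(product(range(rows), range(cols)))
    steps = {(0, 1), (0, -1), (1, 0), (-1, 0),
             (1, 1), (1, -1), (-1, 1), (-1, -1)}
    adjacent = {cell: [] for cell in cells}
    for (r, c) in cells:
        for dr, dc in steps:
            n = (r + dr, c + dc)
            if n in adjacent:
                adjacent[(r, c)].append(n)
    return cells, adjacent

def radial(cell, direction, rows, cols):
    """Radial step sequence: repeat one step until the ray leaves the board."""
    r, c = cell
    dr, dc = direction
    ray = []
    while 0 <= r + dr < rows and 0 <= c + dc < cols:
        r, c = r + dr, c + dc
        ray.append((r, c))
    return ray

cells, adj = square_board(8, 8)
print(len(adj[(0, 0)]))             # a corner cell has 3 neighbours
print(radial((0, 0), (1, 1), 8, 8)) # the long diagonal from the corner
```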


Optimised Playout Implementations for the Ludii General Game System

arXiv.org Artificial Intelligence

This paper describes three different optimised implementations of playouts, as commonly used by game-playing algorithms such as Monte-Carlo Tree Search. Each of the optimised implementations is applicable only to specific sets of games, based on their rules. The Ludii general game system can automatically infer, based on a game's description in its general game description language, whether any optimised implementations are applicable. An empirical evaluation demonstrates major speedups over a standard implementation, with a median result of running playouts 5.08 times as fast, over 145 different games in Ludii for which one of the optimised implementations is applicable.
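As a hedged illustration of one such rule-dependent specialisation (the game class and structure below are assumptions for the sketch, not Ludii's internals): in games whose legal moves are exactly placements on empty cells, a playout can shuffle the empty cells once rather than recomputing a legal-move list every turn.

```python
import random

class TicTacToe:
    """Minimal 'add-to-empty' game: every legal move places on an empty cell."""
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

    def __init__(self):
        self.board = [0] * 9
        self.player = 1

    def empty_cells(self):
        return [i for i, v in enumerate(self.board) if v == 0]

    def winner(self):
        for a, b, c in self.LINES:
            if self.board[a] and self.board[a] == self.board[b] == self.board[c]:
                return self.board[a]
        return 0  # no winner (yet)

    def apply(self, cell):
        self.board[cell] = self.player
        self.player = 3 - self.player  # alternate players 1 and 2

def add_to_empty_playout(state):
    # Fast path: shuffle the empty cells once and place pieces in that
    # order, instead of recomputing a legal-move list after every move.
    order = state.empty_cells()
    random.shuffle(order)
    for cell in order:
        if state.winner():
            break
        state.apply(cell)
    return state.winner()

print(add_to_empty_playout(TicTacToe()))  # 0 = draw, 1/2 = winning player
```

A generic playout would instead query the full legal-move list at every step, which is what makes specialisations like this one profitable when the rules permit them.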


Automatic Generation of Board Game Manuals

arXiv.org Artificial Intelligence

In this paper we present a process for automatically generating manuals for board games within the Ludii general game system. This process requires many different sub-tasks to be addressed, such as translating Ludii game descriptions into English, visualising moves, highlighting winning moves, and explaining strategy. These aspects are then combined to create a full manual for any given game. This manual is intended to provide a more intuitive explanation of a game's rules and mechanics, particularly for players who are less familiar with the Ludii game description language and grammar.
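A toy Python sketch of the pipeline shape, with invented section names and content (the paper's actual sub-task outputs are richer): each sub-task contributes one section, and the manual is their combination.

```python
def manual_sections(game_name, rules_english, example_moves, strategy_notes):
    # Each sub-task's output becomes one section of the manual.
    yield game_name + ": How to Play"
    yield "Rules: " + rules_english
    yield "Example moves: " + "; ".join(example_moves)
    yield "Strategy: " + strategy_notes

manual = "\n\n".join(manual_sections(
    "Tic-Tac-Toe",
    "Players take turns placing a piece on an empty cell; "
    "three in a row wins.",
    ["X plays the centre", "O replies in a corner"],
    "Take the centre if available; always block an opposing two-in-a-row.",
))
print(manual)
```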


General Board Game Concepts

arXiv.org Artificial Intelligence

Games often share common ideas or aspects, such as their rules, controls, or playing area. However, in the context of General Game Playing (GGP) for board games, this area remains under-explored. We propose to formalise the notion of a "game concept", inspired by terms commonly used by game players and designers. Through the Ludii General Game System, we describe concepts at several levels of abstraction, such as the game itself, the moves played, or the states reached. This new GGP feature, associated with the ludeme representation of games, opens many new lines of research. Creating a hyper-agent selector, transferring AI learning between games, and explaining AI techniques using game terms can all be facilitated by the use of game concepts. Other applications that can benefit from game concepts are also discussed, such as generating plausible reconstructed rules for incomplete ancient games, or implementing a board game recommender system.
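One way to picture this, as a hypothetical Python sketch rather than Ludii's actual concept taxonomy: concepts are named properties attached to a level of abstraction and computed from the corresponding object.

```python
# Illustrative sketch: concept names and levels are invented examples.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Concept:
    name: str
    level: str                       # "game", "move", or "state"
    compute: Callable[[Any], float]  # boolean concepts return 0.0 / 1.0

CONCEPTS = [
    Concept("NumPlayers", "game", lambda g: float(g["players"])),
    Concept("UsesDice", "game", lambda g: float("dice" in g["equipment"])),
    Concept("BranchingFactor", "state", lambda s: float(len(s["legal_moves"]))),
]

game = {"players": 2, "equipment": ["board", "pieces"]}
state = {"legal_moves": ["a1", "b2", "c3"]}

for concept in CONCEPTS:
    target = game if concept.level == "game" else state
    print(concept.name, concept.compute(target))
```

A vector of such concept values gives games and states a common feature space, which is what enables the hyper-agent, transfer-learning, and recommender applications mentioned above.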


Deceptive Level Generation for Angry Birds

arXiv.org Artificial Intelligence

The Angry Birds AI competition has been held for many years to encourage the development of AI agents that can play Angry Birds levels better than human players. Many agents with various approaches have been employed over the competition's lifetime to solve this task. Even though the performance of these agents has increased significantly over the past few years, they still show major drawbacks when playing deceptive levels. This is because most current agents try to identify the best next shot rather than planning an effective sequence of shots. To encourage advancements in such agents, we present an automated methodology for generating deceptive game levels for Angry Birds. Although there are many existing content generators for Angry Birds, they do not focus on generating deceptive levels. In this paper, we propose a procedure to generate deceptive levels for six deception categories that can fool state-of-the-art Angry Birds playing AI agents. Our results show that the generated deceptive levels exhibit characteristics similar to human-created deceptive levels. Additionally, we define metrics to measure the stability, solvability, and degree of deception of the generated levels.
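The paper defines its own metrics; as a stand-in illustration only, a "degree of deception" could be scored as the normalised gap between a greedy next-shot agent and a sequence-planning agent on the same levels. The definition below is invented for the sketch, not the paper's.

```python
def degree_of_deception(greedy_scores, planning_scores):
    """Mean normalised gap between planning and greedy outcomes per level."""
    gaps = [(p - g) / p if p > 0 else 0.0
            for g, p in zip(greedy_scores, planning_scores)]
    return sum(gaps) / len(gaps)

# On deceptive levels a greedy next-shot agent underperforms, so the gap
# (and hence the deception score) is large.
print(degree_of_deception(greedy_scores=[10, 5, 0],
                          planning_scores=[50, 40, 30]))
```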


General Game Heuristic Prediction Based on Ludeme Descriptions

arXiv.org Artificial Intelligence

This paper investigates the performance of different general-game-playing heuristics for games in the Ludii general game system. Based on these results, we train several regression learning models to predict the performance of these heuristics based on each game's description file. We also provide a condensed analysis of the games available in Ludii, and the different ludemes that define them.
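A minimal sketch of this setup with synthetic data (the actual features, targets, and models in the paper may differ): treat each game description as a bag of ludeme tokens and regress a heuristic's performance onto the counts.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

# Toy 'game description files' flattened into ludeme token strings;
# both the games and the targets below are synthetic.
games = [
    "board square line3 add empty",
    "board hex connect add empty",
    "board square capture step slide",
]
heuristic_win_rate = [0.61, 0.58, 0.43]

X = CountVectorizer().fit_transform(games)  # bag-of-ludemes features
model = Ridge(alpha=1.0).fit(X, heuristic_win_rate)
print(model.predict(X[:1]))  # predicted win rate for the first game
```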


Transfer of Fully Convolutional Policy-Value Networks Between Games and Game Variants

arXiv.org Artificial Intelligence

In this paper, we use fully convolutional architectures in AlphaZero-like self-play training setups to facilitate transfer between variants of board games as well as distinct games. We explore how to transfer trained parameters of these architectures based on shared semantics of channels in the state and action representations of the Ludii general game system. We use Ludii's large library of games and game variants for extensive transfer learning evaluations, in zero-shot transfer experiments as well as experiments with additional fine-tuning time.
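A minimal sketch of the channel-matching idea, assuming first-layer convolution weights of shape (out_channels, in_channels, k, k) and illustrative channel labels: weight slices for semantically shared input channels are copied across, while channels unique to the target game keep their fresh initialisation.

```python
import numpy as np

def transfer_first_layer(src_w, src_channels, tgt_w, tgt_channels):
    # Copy learned filters for every input channel whose semantic label
    # also exists in the source game; leave the rest untouched.
    src_index = {name: i for i, name in enumerate(src_channels)}
    for j, name in enumerate(tgt_channels):
        if name in src_index:
            tgt_w[:, j] = src_w[:, src_index[name]]
    return tgt_w

# Channel labels are invented examples of shared state semantics.
src_channels = ["piece_P1", "piece_P2", "last_move"]
tgt_channels = ["piece_P1", "piece_P2", "swap_allowed"]  # one novel channel

src_w = np.random.randn(16, len(src_channels), 3, 3)
tgt_w = 0.01 * np.random.randn(16, len(tgt_channels), 3, 3)
tgt_w = transfer_first_layer(src_w, src_channels, tgt_w, tgt_channels)
print(np.allclose(tgt_w[:, 0], src_w[:, 0]))  # True: shared channel copied
```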


Manipulating the Distributions of Experience used for Self-Play Learning in Expert Iteration

arXiv.org Artificial Intelligence

Expert Iteration (ExIt) is an effective framework for learning game-playing policies from self-play. ExIt involves training a policy to mimic the search behaviour of a tree search algorithm, such as Monte-Carlo tree search, and using the trained policy to guide the search. The policy and the tree search can then iteratively improve each other, through experience gathered in self-play between instances of the guided tree search algorithm. This paper outlines three different approaches for manipulating the distribution of data collected from self-play, and the procedure that samples batches for learning updates from the collected data. Firstly, samples in batches are weighted based on the durations of the episodes in which they were originally experienced. Secondly, Prioritized Experience Replay is applied within the ExIt framework, to prioritise sampling experience from which we expect to obtain valuable training signals. Thirdly, a trained exploratory policy is used to diversify the trajectories experienced in self-play. This paper summarises the effects of these manipulations on training performance in fourteen different board games. We find major improvements in early training performance in some games, and minor improvements on average across all fourteen games.
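As a sketch of the first manipulation under stated assumptions (the replay format here is invented): weighting each sample by the inverse of its episode's duration makes every episode, rather than every time step, contribute equally to learning updates.

```python
import random

def sample_batch(replay, batch_size):
    """replay: list of (state, target, episode_length) tuples."""
    # Inverse-duration weights: a 100-step episode contributes no more
    # in expectation than a 10-step one.
    weights = [1.0 / ep_len for (_, _, ep_len) in replay]
    return random.choices(replay, weights=weights, k=batch_size)

replay = [("s0", 1.0, 10), ("s1", -1.0, 10), ("s2", 1.0, 100)]
batch = sample_batch(replay, batch_size=4)
print([s for (s, _, _) in batch])  # samples from long episodes are rarer
```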


Superstition in the Network: Deep Reinforcement Learning Plays Deceptive Games

arXiv.org Artificial Intelligence

Deep reinforcement learning has learned to play many games well, but has failed on others. To better characterize the modes and reasons of failure of deep reinforcement learners, we test the widely used Advantage Actor-Critic (A2C) algorithm on four deceptive games, which are specially designed to provide challenges to game-playing agents. These games are implemented in the General Video Game AI framework, which allows us to compare the behavior of reinforcement learning-based agents with planning agents based on tree search. We find that several of these games reliably deceive deep reinforcement learners, and that the resulting behavior highlights the shortcomings of the learning algorithm. The particular ways in which agents fail differ from how planning-based agents fail, further illuminating the character of these algorithms. We propose an initial typology of deceptions which could help us better understand the pitfalls and failure modes of (deep) reinforcement learning.

Introduction: In reinforcement learning (RL) (Sutton and Barto 1998), an agent is tasked with learning a policy that maximizes expected reward based only on its interactions with the environment. In general, there is no guarantee that any such procedure will lead to an optimal policy; while convergence proofs exist, they apply only to a tiny and rather uninteresting class of environments. Reinforcement learning still performs well for a wide range of scenarios not covered by those convergence proofs. However, while recent successes in game-playing with deep reinforcement learning (Justesen et al. 2017) have led to a high degree of confidence in the deep RL approach, there are still scenarios or games where deep RL fails. Some oft-mentioned reasons why RL algorithms fail are partial observability and long time spans between actions and rewards. But are there other causes?
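For reference, a simplified sketch of the A2C objective tested above, written as plain Python over per-step lists; real implementations batch this with automatic differentiation, treat the advantage as a constant in the policy term, and usually add an entropy bonus.

```python
import math

def a2c_loss(log_probs, values, returns, value_coef=0.5):
    # Advantage A = R - V(s): the policy term pushes up log-probabilities
    # of actions with positive advantage, while the value term regresses
    # V(s) towards the observed return. Simplified illustration only.
    advantages = [r - v for r, v in zip(returns, values)]
    policy_loss = -sum(lp * a for lp, a in zip(log_probs, advantages))
    value_loss = sum(a * a for a in advantages)
    return policy_loss + value_coef * value_loss

print(a2c_loss(log_probs=[math.log(0.5)], values=[0.2], returns=[1.0]))
```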


Ludii and XCSP: Playing and Solving Logic Puzzles

arXiv.org Artificial Intelligence

Many famous single-player games, commonly called puzzles, can be shown to be NP-complete. Indeed, this complexity class contains hundreds of puzzles, since people particularly appreciate completing an intractable puzzle, such as Sudoku, while still being able to check their solution easily once it is done. For this reason, constraint programming is naturally suited to solving them. In this paper, we focus on logic puzzles described in the Ludii general game system and propose using the XCSP formalism to solve them with any CSP solver.
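As a toy illustration of the CSP view (a hand-rolled solver, not the XCSP toolchain): a 3x3 Latin square reduces to variables with domains {1..3} under all-different row and column constraints, exactly the kind of declarative model an XCSP encoding would hand to any CSP solver.

```python
# Illustrative backtracking solver for a 3x3 Latin square as a CSP.
from itertools import product

N = 3
cells = list(product(range(N), range(N)))

def consistent(assign, cell, val):
    # All-different constraint over the cell's row and column.
    r, c = cell
    return all(assign.get((r2, c2)) != val
               for (r2, c2) in assign
               if r2 == r or c2 == c)

def solve(assign=None):
    assign = assign or {}
    if len(assign) == len(cells):
        return assign
    cell = next(c for c in cells if c not in assign)
    for val in range(1, N + 1):
        if consistent(assign, cell, val):
            result = solve({**assign, cell: val})
            if result:
                return result
    return None  # backtrack

solution = solve()
for r in range(N):
    print([solution[(r, c)] for c in range(N)])
```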