AI and Wargaming

arXiv.org Artificial Intelligence

Recent progress in Game AI has demonstrated that given enough data from human gameplay, or experience gained via simulations, machines can rival or surpass the most skilled human players in classic games such as Go, or commercial computer games such as Starcraft. We review the current state-of-the-art through the lens of wargaming, and ask firstly what features of wargames distinguish them from the usual AI testbeds, and secondly which recent AI advances are best suited to address these wargame-specific features.


Predicting Game Difficulty and Churn Without Players

arXiv.org Artificial Intelligence

We propose a novel simulation model that is able to predict the per-level churn and pass rates of Angry Birds Dream Blast, a popular mobile free-to-play game. Our primary contribution is to combine AI gameplay using Deep Reinforcement Learning (DRL) with a simulation of how the player population evolves over the levels. The AI players predict level difficulty, which is used to drive a player population model with simulated skill, persistence, and boredom. This allows us to model, e.g., how less persistent and skilled players are more sensitive to high difficulty, and how such players churn early, which makes the player population and the relation between difficulty and churn evolve level by level. Our work demonstrates that player behavior predictions produced by DRL gameplay can be significantly improved by even a very simple population-level simulation of individual player differences, without requiring costly retraining of agents or collecting new DRL gameplay data for each simulated player.
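
As a loose illustration of the population-simulation idea (not the paper's model), the Python sketch below feeds assumed per-level difficulty scores, such as could be derived from DRL agent failure rates, into a simulated population whose members differ in skill and persistence; churn and pass rates then evolve level by level as the less persistent, less skilled players drop out. The level values, distributions, and churn floor are made up for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative inputs: per-level difficulty scores, e.g. derived from how often
    # a DRL agent fails each level (NOT values from the paper).
    level_difficulty = np.array([0.2, 0.3, 0.5, 0.7, 0.6, 0.8])

    # Simulated player population with individual skill and persistence.
    n_players = 10_000
    skill = rng.normal(0.0, 1.0, n_players)
    persistence = rng.uniform(0.2, 1.0, n_players)
    active = np.ones(n_players, dtype=bool)

    pass_rate, churn_rate = [], []
    for d in level_difficulty:
        # Probability of passing grows with skill and shrinks with difficulty.
        p_pass = 1.0 / (1.0 + np.exp(-(skill - 3.0 * d)))
        passed = rng.random(n_players) < p_pass

        # Less persistent players are more likely to churn on hard levels they fail.
        p_churn = np.where(passed, 0.01, (1.0 - persistence) * d)
        churned = active & (rng.random(n_players) < p_churn)

        pass_rate.append(passed[active].mean())
        churn_rate.append(churned.sum() / max(active.sum(), 1))
        active &= ~churned

    print("per-level pass rates :", np.round(pass_rate, 3))
    print("per-level churn rates:", np.round(churn_rate, 3))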


Multi-Agent Reinforcement Learning with Graph Clustering

arXiv.org Artificial Intelligence

In this paper, we introduce the group concept into multi-agent reinforcement learning. In this method, agents are divided into several groups and each group completes a specific subtask, so that agents can cooperate to complete the main task. Existing methods use a communication vector to exchange information between agents, which can lead to communication redundancy. To address this problem, we propose a MARL method based on graph clustering that allows agents to adaptively learn group features and replaces the communication operation. In our method, agent features are divided into two types, in-group features and individual features, which represent the commonalities and the differences between agents, respectively. Building on the graph attention network (GAT), we introduce a graph clustering penalty to optimize the agents' group features. These features are then used to generate individual Q values. To overcome the consistency problem introduced by GAT, we add a split loss that keeps agent features distinguishable. Our method is easily converted to the centralized training with decentralized execution (CTDE) framework using the Kullback-Leibler divergence. Empirical results are reported on a challenging set of StarCraft II micromanagement tasks. They show that our method outperforms existing multi-agent reinforcement learning methods and that its advantage grows as the number of agents increases.
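
To make the feature-split idea concrete, here is a minimal PyTorch sketch (not the paper's architecture): each agent embedding is divided into an in-group part pulled toward its group centroid by a clustering penalty and an individual part kept distinct by a split loss. The fixed grouping and the exact loss forms are illustrative assumptions; in the method the groups are learned adaptively on top of a graph attention network.

    import torch
    import torch.nn.functional as F

    n_agents, dim = 6, 8
    embeddings = torch.randn(n_agents, 2 * dim, requires_grad=True)
    group_feat, indiv_feat = embeddings[:, :dim], embeddings[:, dim:]
    groups = torch.tensor([0, 0, 0, 1, 1, 1])  # illustrative fixed grouping

    # Clustering penalty: in-group features should match their group centroid.
    centroids = torch.stack([group_feat[groups == g].mean(0) for g in groups.unique()])
    cluster_loss = F.mse_loss(group_feat, centroids[groups])

    # Split loss: individual features of different agents should not collapse.
    sim = F.cosine_similarity(indiv_feat.unsqueeze(1), indiv_feat.unsqueeze(0), dim=-1)
    split_loss = (sim - torch.eye(n_agents)).abs().mean()

    loss = cluster_loss + split_loss
    loss.backward()  # gradients flow back into the agent embeddings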


PC-PG: Policy Cover Directed Exploration for Provable Policy Gradient Learning

arXiv.org Artificial Intelligence

Direct policy gradient methods for reinforcement learning are a successful approach for a variety of reasons: they are model free, they directly optimize the performance metric of interest, and they allow for richly parameterized policies. Their primary drawback is that, by being local in nature, they fail to adequately explore the environment. In contrast, while model-based approaches and Q-learning directly handle exploration through the use of optimism, their ability to handle model misspecification and function approximation is far less evident. This work introduces the Policy Cover-Policy Gradient (PC-PG) algorithm, which provably balances the exploration vs. exploitation tradeoff using an ensemble of learned policies (the policy cover). PC-PG enjoys polynomial sample complexity and run time for both tabular MDPs and, more generally, linear MDPs in an infinite dimensional RKHS. Furthermore, PC-PG also has strong guarantees under model misspecification that go beyond the standard worst case $\ell_{\infty}$ assumptions; this includes approximation guarantees for state aggregation under an average case error assumption, along with guarantees under a more general assumption where the approximation error under distribution shift is controlled. We complement the theory with empirical evaluation across a variety of domains in both reward-free and reward-driven settings.
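
The outer loop of the policy-cover idea can be sketched on a toy chain MDP as follows; the environment, the bonus form, and the plain REINFORCE update are simplified stand-ins rather than the paper's algorithm, which uses natural policy gradient and comes with formal guarantees.

    import numpy as np

    n_states, n_actions, horizon = 8, 2, 10
    rng = np.random.default_rng(0)

    def step(s, a):
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        return s2, 1.0 if s2 == n_states - 1 else 0.0

    def rollout(theta):
        s, traj = 0, []
        for _ in range(horizon):
            p = np.exp(theta[s] - theta[s].max()); p /= p.sum()
            a = rng.choice(n_actions, p=p)
            s2, r = step(s, a)
            traj.append((s, a, r))
            s = s2
        return traj

    def visitation(policies, n_roll=50):
        counts = np.zeros(n_states)
        for theta in policies:
            for _ in range(n_roll):
                for s, _, _ in rollout(theta):
                    counts[s] += 1
        return counts / max(counts.sum(), 1)

    cover = [np.zeros((n_states, n_actions))]            # start from a uniform policy
    for _ in range(5):
        d_cover = visitation(cover)
        bonus = 1.0 / np.sqrt(d_cover + 1e-3)            # optimism toward uncovered states
        theta = np.zeros((n_states, n_actions))
        for _ in range(200):                             # REINFORCE on reward + bonus
            traj = rollout(theta)
            ret = sum(r + bonus[s] for s, _, r in traj)
            for s, a, _ in traj:
                p = np.exp(theta[s] - theta[s].max()); p /= p.sum()
                grad = -p; grad[a] += 1.0
                theta[s] += 0.05 * ret * grad
        cover.append(theta)                              # grow the policy cover

    print("states visited by the cover:", int((visitation(cover) > 0).sum()), "of", n_states)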


HEX and Neurodynamic Programming

arXiv.org Artificial Intelligence

Hex is a complex game with a high branching factor. For the first time, an attempt is made to solve Hex without game tree structures and their associated pruning methods. We also abstain from any heuristic information about Virtual Connections or Semi Virtual Connections, which were used in all previously known computer versions of the game. The H-search algorithm, which was the basis for finding such connections and had been used successfully in earlier Hex-playing agents, has been forgone. Instead, we use reinforcement learning through self-play and neural-network function approximation to bypass the problems of the high branching factor and of maintaining large tables for state-action evaluations. Our code is based primarily on NeuroHex. The inspiration is drawn from the recent success of AlphaGo Zero.


Modeling and Prediction of Human Driver Behavior: A Survey

arXiv.org Artificial Intelligence

We present a review and taxonomy of 200 models from the literature on driver behavior modeling. We begin by introducing a mathematical formulation based on the partially observable stochastic game, which serves as a common framework for comparing and contrasting different driver models. Our taxonomy is constructed around the core modeling tasks of state estimation, intention estimation, trait estimation, and motion prediction; we also discuss the auxiliary tasks of risk estimation, anomaly detection, behavior imitation, and microscopic traffic simulation. Existing driver models are categorized based on the specific tasks they address and key attributes of their approach.
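
For reference, one common way to write the partially observable stochastic game that such a formulation builds on is the tuple below; the paper's exact notation may differ.

    \[
      \mathcal{G} = \bigl(\mathcal{I},\ \mathcal{S},\ \{\mathcal{A}^i\}_{i\in\mathcal{I}},\
                    \{\Omega^i\}_{i\in\mathcal{I}},\ T,\ \{O^i\}_{i\in\mathcal{I}},\
                    \{R^i\}_{i\in\mathcal{I}}\bigr)
    \]

where $\mathcal{I}$ is the set of agents (drivers), $\mathcal{S}$ the state space, $\mathcal{A}^i$ and $\Omega^i$ agent $i$'s action and observation spaces, $T(s' \mid s, a^1, \dots, a^{|\mathcal{I}|})$ the joint transition model, $O^i(o^i \mid s', a^i)$ agent $i$'s observation model, and $R^i(s, a^1, \dots, a^{|\mathcal{I}|})$ its reward function.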


Learning to Play Two-Player Perfect-Information Games without Knowledge

arXiv.org Artificial Intelligence

In this paper, several techniques for learning game state evaluation functions by reinforcement are proposed. The first is a generalization of tree bootstrapping (tree learning): it is adapted to the context of reinforcement learning without knowledge, based on non-linear functions. With this technique, no information is lost during the reinforcement learning process. The second is a modification of minimax with unbounded depth that extends the best sequences of actions to the terminal states. This modified search is intended to be used during the learning process. The third is to replace the classic gain of a game (+1 / -1) with a reinforcement heuristic; we study particular reinforcement heuristics such as quick wins and slow defeats, scoring, and mobility or presence. The fourth is another variant of unbounded minimax, which plays the safest action instead of the best action. This modified search is intended to be used after the learning process. The fifth is a new action selection distribution. The conducted experiments suggest that these techniques improve the level of play. Finally, we apply these different techniques to design program-players for the game of Hex (sizes 11 and 13), surpassing the level of Mohex 2.0 with reinforcement learning from self-play without knowledge. At Hex size 11 (without swap), the program-player reaches the level of Mohex 3HNN.
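
A minimal Python sketch of the general tree-bootstrapping idea behind the first technique, assuming placeholder game, evaluate, and examples interfaces that are not from the paper: every node visited by a depth-limited negamax search contributes a training example whose target is its minimax backup, so the information computed by the search is kept rather than discarded.

    def tree_learning_minimax(state, depth, game, evaluate, examples):
        """Return the negamax value of `state` and collect (state, target) pairs."""
        if depth == 0 or game.is_terminal(state):
            return evaluate(state)           # learned evaluation, from side to move
        values = []
        for move in game.legal_moves(state):
            child = game.play(state, move)
            values.append(-tree_learning_minimax(child, depth - 1, game, evaluate, examples))
        backup = max(values)                 # negamax convention
        examples.append((state, backup))     # bootstrapped training target for this state
        return backup

    # Usage sketch (hex_game, net, self_play_positions are hypothetical placeholders):
    # examples = []
    # for position in self_play_positions:
    #     tree_learning_minimax(position, depth=3, game=hex_game, evaluate=net, examples=examples)
    # net.fit(examples)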


Formal Fields: A Framework to Automate Code Generation Across Domains

arXiv.org Artificial Intelligence

Code generation, defined as automatically writing a piece of code to solve a given problem for which an evaluation function exists, is a classic hard AI problem. Its general form, writing code from scratch in a general-purpose language used by human programmers, is thought to be impractical. Adding constraints to the code grammar, implementing domain-specific concepts as primitives, and providing examples for the algorithm to learn from makes it practical. Formal Fields is a framework for doing code generation across domains with the same algorithms and language structure. Its ultimate goal is not just solving different narrow problems, but providing the abstractions necessary to integrate many working solutions into a single lifelong reasoning system. It provides a common grammar to define a domain language, a problem, and its evaluation. The framework learns the structure of the domain language from example code snippets and searches entirely new code snippets to solve unseen problems in the same field. Formal Fields abstracts the search algorithm away from the problem; the search algorithm is taken from existing reinforcement learning algorithms, and in our implementation it is an apropos Monte-Carlo Tree Search (MCTS). We have implemented Formal Fields as a fully documented open-source project applied to the Abstract Reasoning Challenge (ARC). The implementation found code snippets solving twenty-two previously unsolved ARC problems.
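
The separation of grammar, problem, and evaluation can be illustrated with the toy Python sketch below; the three-primitive grid DSL and the brute-force enumeration are stand-ins for the framework's domain grammar and its MCTS search, not the actual implementation.

    import itertools

    PRIMITIVES = {                          # tiny illustrative grid DSL
        "flip_h": lambda g: [row[::-1] for row in g],
        "flip_v": lambda g: g[::-1],
        "transpose": lambda g: [list(r) for r in zip(*g)],
    }

    def evaluate(snippet, examples):
        """A snippet solves the problem iff it maps every input grid to its output grid."""
        def run(grid):
            for op in snippet:
                grid = PRIMITIVES[op](grid)
            return grid
        return all(run(inp) == out for inp, out in examples)

    def search(examples, max_len=3):
        """Enumerate snippets up to max_len primitives (stand-in for the MCTS)."""
        for length in range(1, max_len + 1):
            for snippet in itertools.product(PRIMITIVES, repeat=length):
                if evaluate(snippet, examples):
                    return list(snippet)
        return None

    examples = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]   # task: mirror horizontally
    print(search(examples))                              # -> ['flip_h']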


Lifelong Incremental Reinforcement Learning with Online Bayesian Inference

arXiv.org Artificial Intelligence

A central capability of a long-lived reinforcement learning (RL) agent is to incrementally adapt its behavior as its environment changes, and to incrementally build upon previous experiences to facilitate future learning in real-world scenarios. In this paper, we propose LifeLong Incremental Reinforcement Learning (LLIRL), a new incremental algorithm for efficient lifelong adaptation to dynamic environments. We develop and maintain a library that contains an infinite mixture of parameterized environment models, which is equivalent to clustering environment parameters in a latent space. The prior distribution over the mixture is formulated as a Chinese restaurant process (CRP), which incrementally instantiates new environment models without any external information to signal environmental changes in advance. During lifelong learning, we employ the expectation maximization (EM) algorithm with online Bayesian inference to update the mixture in a fully incremental manner. In EM, the E-step involves estimating the posterior expectation of environment-to-cluster assignments, while the M-step updates the environment parameters for future learning. This method allows for all environment models to be adapted as necessary, with new models instantiated for environmental changes and old models retrieved when previously seen environments are encountered again. Experiments demonstrate that LLIRL outperforms relevant existing methods, and enables effective incremental adaptation to various dynamic environments for lifelong learning.
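
A minimal sketch of the CRP-mixture mechanism, assuming toy Gaussian environment models and single-observation updates (LLIRL's actual environment models and EM updates are richer): each incoming observation is assigned to the most probable existing cluster or to a freshly instantiated one, and the responsible cluster's parameters are then updated online.

    import numpy as np

    class CRPMixture:
        def __init__(self, alpha=1.0, var=1.0, base_var=100.0):
            self.alpha, self.var, self.base_var = alpha, var, base_var
            self.means, self.counts = [], []

        def _log_gauss(self, x, mean, var):
            return -0.5 * x.size * np.log(2 * np.pi * var) - 0.5 * np.sum((x - mean) ** 2) / var

        def assign(self, x):
            # E-step: posterior over existing clusters plus one potential new cluster.
            log_p = [np.log(c) + self._log_gauss(x, m, self.var)
                     for c, m in zip(self.counts, self.means)]
            # CRP prior mass alpha for a new cluster, scored under a broad base distribution.
            log_p.append(np.log(self.alpha) + self._log_gauss(x, np.zeros_like(x), self.base_var))
            log_p = np.array(log_p)
            post = np.exp(log_p - log_p.max()); post /= post.sum()
            k = int(post.argmax())
            if k == len(self.means):          # instantiate a new environment model
                self.means.append(np.array(x, dtype=float))
                self.counts.append(0)
            return k

        def update(self, k, x, lr=0.3):
            # M-step: move the responsible cluster's parameters toward the observation.
            self.counts[k] += 1
            self.means[k] += lr * (x - self.means[k])

    mix = CRPMixture()
    stream = [np.array([0.0, 0.0]), np.array([0.1, -0.1]),   # one environment
              np.array([5.0, 5.0]), np.array([5.2, 4.9])]    # a new environment appears later
    for obs in stream:
        mix.update(mix.assign(obs), obs)
    print("environment models instantiated:", len(mix.means))  # expected: 2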


A Review on Computational Intelligence Techniques in Cloud and Edge Computing

arXiv.org Artificial Intelligence

Cloud computing (CC) is a centralized computing paradigm that pools resources centrally and provides them to users through the Internet. Although CC holds a large number of resources, it may not be suitable for real-time mobile applications because it is usually geographically far from users. On the other hand, edge computing (EC), which distributes resources to the network edge, enjoys increasing popularity in applications with low-latency and high-reliability requirements. EC provides resources in a decentralized manner and can respond to users' requirements faster than conventional CC, but with limited computing capacity. As both CC and EC are resource-sensitive, several big issues arise, such as how to conduct job scheduling, resource allocation, and task offloading, which significantly influence the performance of the whole system. To tackle these issues, many optimization problems have been formulated. These optimization problems usually have complex properties, such as non-convexity and NP-hardness, which may not be addressed by traditional convex-optimization-based solutions. Computational intelligence (CI), consisting of a set of nature-inspired computational approaches, has recently exhibited great potential in addressing these optimization problems in CC and EC. This paper provides an overview of the research problems in CC and EC and of recent progress in addressing them with the help of CI techniques. Informative discussions and future research trends are also presented, with the aim of offering insights to readers and motivating new research directions.