Uriarte, Alberto

Single Believe State Generation for Handling Partial Observability with MCTS in StarCraft

AAAI Conferences

A significant amount of work exists on handling partial observability for different game genres in the context of game tree search. However, most of those techniques do not scale up to RTS games. In this paper we present an experimental evaluation of a recently proposed technique, "single believe state generation," in the context of StarCraft. We evaluate the proposed approach in a StarCraft-playing bot and show that the technique is enough to bring the bot's performance close to the theoretical optimum, in which the bot can observe the whole game state.
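
To make the idea concrete, here is a minimal sketch of single belief state generation: rather than sampling many belief states, the bot completes its partial observation into one plausible full state and hands that to MCTS as if it were fully observable. All type and function names (Unit, PartialObservation, generate_single_belief_state) are illustrative assumptions, not from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Unit:
    owner: int
    unit_type: str
    x: int
    y: int

@dataclass
class PartialObservation:
    visible_units: List[Unit]    # units currently in sight range
    last_seen_enemy: List[Unit]  # enemy units observed earlier, now hidden
    known_enemy_count: int       # lower bound inferred from scouting

def generate_single_belief_state(obs: PartialObservation) -> List[Unit]:
    """Complete a partial observation into ONE plausible full state."""
    state = list(obs.visible_units)
    # Assume previously seen enemy units remain at their last known position.
    state.extend(obs.last_seen_enemy)
    # Place never-seen enemy units at the most likely spot under a simple
    # prior (here: the enemy start location, hard-coded as (0, 0)).
    missing = obs.known_enemy_count - len(obs.last_seen_enemy)
    for _ in range(max(0, missing)):
        state.append(Unit(owner=2, unit_type="unknown", x=0, y=0))
    return state  # MCTS then searches this state as if fully observable
```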


Improving Monte Carlo Tree Search Policies in StarCraft via Probabilistic Models Learned from Replay Data

AAAI Conferences

Applying game-tree search techniques to RTS games poses a significant challenge, given the large branching factors involved. This paper studies an approach to incorporate knowledge learned offline from game replays to guide the search process. Specifically, we propose to learn Naive Bayesian models predicting the probability of action execution in different game states, and to use them to inform the search process of Monte Carlo Tree Search. We evaluate the effect of incorporating these models into several Multi-armed Bandit policies for MCTS in the context of StarCraft, showing a significant improvement in gameplay performance.
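
A hedged sketch of the idea follows: a Naive Bayes model estimates the probability of each action from replay counts, and that probability is folded into the bandit policy as a prior-weighted exploration bonus. The PUCT-style blending rule shown here is one plausible instantiation, not necessarily the exact policies evaluated in the paper.

```python
import math

def naive_bayes_prob(action, features, action_counts, feature_counts, total):
    """P(action) * prod_f P(f | action), with add-one smoothing."""
    p = (action_counts.get(action, 0) + 1) / (total + max(1, len(action_counts)))
    for f in features:
        p *= (feature_counts.get((action, f), 0) + 1) / (action_counts.get(action, 0) + 2)
    return p

def informed_ucb(children, features, action_counts, feature_counts, total, c=1.0):
    """Select the child maximizing mean reward plus a prior-weighted bonus."""
    parent_visits = sum(ch["visits"] for ch in children) or 1
    priors = [naive_bayes_prob(ch["action"], features, action_counts,
                               feature_counts, total) for ch in children]
    z = sum(priors) or 1.0  # normalize priors over the legal actions
    best, best_score = None, -math.inf
    for ch, prior in zip(children, priors):
        exploit = ch["reward"] / ch["visits"] if ch["visits"] else 0.0
        explore = c * (prior / z) * math.sqrt(parent_visits) / (1 + ch["visits"])
        if exploit + explore > best_score:
            best, best_score = ch, exploit + explore
    return best
```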


Improving Terrain Analysis and Applications to RTS Game AI

AAAI Conferences

This paper presents a new terrain analysis algorithm for RTS games. The proposed algorithm significantly improves on the analysis time of the state of the art via contour tracing, and it also offers better chokepoint detection. We demonstrate that our approach (BWTA2) is at least 10 times faster than the commonly used BWTA on a collection of StarCraft maps. Additionally, we show the usefulness of terrain analysis in tasks such as pathfinding and discuss potential applications to strategic decision-making tasks.
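
As a rough illustration of the contour-tracing step, the sketch below walks the boundary of an unwalkable region on a tile grid instead of scanning every tile, which is where the speed-up comes from. This is a textbook Moore-neighbour trace under simplifying assumptions; BWTA2 itself does considerably more (region decomposition, chokepoint detection).

```python
def trace_contour(grid, start):
    """grid[y][x] == 1 marks unwalkable tiles; start must be a boundary tile."""
    # The 8 neighbours in clockwise order, starting from "west".
    dirs = [(-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1)]
    h, w = len(grid), len(grid[0])
    contour, cur, prev_dir = [start], start, 0
    while True:
        for i in range(8):
            d = (prev_dir + i) % 8
            nx, ny = cur[0] + dirs[d][0], cur[1] + dirs[d][1]
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 1:
                cur = (nx, ny)
                prev_dir = (d + 6) % 8   # back up two steps for the next search
                break
        else:
            break                        # isolated tile: no boundary to follow
        if cur == start:
            break                        # closed the loop around the region
        contour.append(cur)
    return contour
```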



Automatic Learning of Combat Models for RTS Games

AAAI Conferences

Game tree search algorithms, such as Monte Carlo Tree Search (MCTS), require access to a forward model (or "simulator") of the game at hand. However, in some games such a forward model is not readily available. In this paper we address the problem of automatically learning forward models (more specifically, combat models) for two-player attrition games. We report experiments comparing several approaches that learn such a combat model from replay data against models generated by hand. We use StarCraft, a Real-Time Strategy (RTS) game, as our application domain. Specifically, we use a large collection of already collected replays and focus on learning a combat model for tactical combats.
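
A minimal sketch of one way such a combat model could be fit from replays: estimate each unit type's effective damage per frame against each target type from observed engagements, then answer forward-model queries such as time-to-kill. The CombatRecord fields here are illustrative assumptions; the paper compares several richer model variants.

```python
from collections import defaultdict

class CombatRecord:
    """One observed engagement extracted from a replay (fields assumed)."""
    def __init__(self, attacker_type, target_type, frames, hp_lost):
        self.attacker_type = attacker_type  # e.g., "Marine"
        self.target_type = target_type      # e.g., "Zealot"
        self.frames = frames                # duration of the engagement
        self.hp_lost = hp_lost              # hit points the target side lost

def learn_dpf(records):
    """Average observed damage per frame for every (attacker, target) pair."""
    total = defaultdict(float)
    frames = defaultdict(float)
    for r in records:
        key = (r.attacker_type, r.target_type)
        total[key] += r.hp_lost
        frames[key] += r.frames
    return {k: total[k] / frames[k] for k in total if frames[k] > 0}

def predict_time_to_kill(dpf, attacker_type, target_type, target_hp):
    """Forward-model query: frames the attacker needs to destroy the target."""
    rate = dpf.get((attacker_type, target_type), 0.0)
    return float("inf") if rate == 0 else target_hp / rate
```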


Planning in RTS Games with Incomplete Action Definitions via Answer Set Programming

AAAI Conferences

Standard game tree search algorithms, such as minimax or Monte Carlo Tree Search, assume the existence of an accurate forward model that simulates the effects of actions in the game. Creating such a model, however, is a challenge in itself. One cause of the task's complexity is the gap in level of abstraction between the informal specification of the model and its implementation language. To overcome this issue, we propose a technique for the implementation of forward models that relies on the Answer Set Programming paradigm and on well-established knowledge representation techniques from defeasible reasoning and reasoning about actions and change. We evaluate our approach in the context of Real-Time Strategy games using a collection of StarCraft scenarios.
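
The sketch below mimics, in plain Python rather than ASP, the kind of declarative forward model the paper advocates: actions defined by preconditions and effects, plus a default (defeasible) frame rule that lets every fact persist unless explicitly contradicted. The action definition is a hypothetical StarCraft-flavoured example, not an encoding from the paper.

```python
def step(state, action, rules):
    """state: set of ground facts; rules: {action: (preconds, adds, deletes)}."""
    pre, adds, dels = rules[action]
    if not pre <= state:
        return state                     # preconditions unmet: no change
    # Frame rule: by default every fact persists unless explicitly deleted.
    return (state - dels) | adds

rules = {
    "train_marine": ({"has_barracks", "minerals_50"},   # preconditions
                     {"marine"},                        # added facts
                     {"minerals_50"}),                  # deleted facts
}
state = {"has_barracks", "minerals_50", "scv"}
print(step(state, "train_marine", rules))
# Adds 'marine', removes 'minerals_50'; 'scv' persists via the frame rule.
```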


A Benchmark for StarCraft Intelligent Agents

AAAI Conferences

The problem of comparing the performance of different Real-Time Strategy (RTS) Intelligent Agents (IA) is non-trivial, and different research groups often employ testing methodologies designed to probe specific aspects of their agents. The lack of a standard process to evaluate and compare different methods in the same context makes progress assessment difficult. To address this problem, this paper presents a set of benchmark scenarios and metrics aimed at evaluating the performance of different techniques or agents for the RTS game StarCraft. We use these scenarios to compare the performance of a collection of bots participating in recent StarCraft AI (Artificial Intelligence) competitions, illustrating the usefulness of the proposed benchmarks.
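
For illustration, a benchmark harness along these lines might run a bot on each scenario several times and report a normalized score per scenario. The survivor-score metric below (difference in remaining army value, scaled to [0, 1]) is one plausible example, and scenario.run is a hypothetical interface; the paper defines its own scenario-specific metrics.

```python
def survivor_score(own_value_left, enemy_value_left, total_value):
    """1.0 = flawless win, 0.5 = even trade, 0.0 = total loss."""
    raw = 0.5 + (own_value_left - enemy_value_left) / (2.0 * total_value)
    return max(0.0, min(1.0, raw))  # clamp to [0, 1]

def run_benchmark(bot, scenarios, episodes=10):
    """Average each scenario's metric over several episodes for one bot."""
    results = {}
    for scenario in scenarios:
        scores = [scenario.run(bot) for _ in range(episodes)]  # each in [0, 1]
        results[scenario.name] = sum(scores) / len(scores)
    return results
```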

