Ontañón
Real-time strategy (RTS) games are hard from an AI point of view because they have enormous state spaces and combinatorial branching factors, allow simultaneous and durative actions, and give players very little time to choose actions. For these reasons, standard game tree search methods such as alpha-beta search or Monte Carlo Tree Search (MCTS) are not sufficient by themselves to handle these games. This paper presents an alternative approach called Adversarial Hierarchical Task Network (AHTN) planning that combines ideas from game tree search with HTN planning. We present the basic algorithm, relate it to existing adversarial hierarchical planning methods, and present new extensions for simultaneous and durative actions to handle RTS games.
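The core idea, searching a game tree whose branches are HTN method decompositions rather than raw moves, can be sketched as a toy (hedged: the tiny domain, the alternating-task scheme, and all numbers below are illustrative assumptions, not the paper's algorithm):

```python
# Toy sketch of adversarial search over HTN decompositions.
# Hypothetical domain: each compound task decomposes, via one of several
# methods, into primitive actions that add to a shared score. The max
# player decomposes "attack", the min player "defend", in alternation.

HTN = {  # task -> list of methods; each method is a list of primitives
    "attack": [[+3], [+1, +1]],
    "defend": [[-2], [-1, -1, -1]],
}

def apply_primitives(score, primitives):
    return score + sum(primitives)

def ahtn_minimax(score, depth, maximizing):
    """Minimax where a node's children are HTN method choices."""
    if depth == 0:
        return score
    task = "attack" if maximizing else "defend"
    results = [ahtn_minimax(apply_primitives(score, m), depth - 1, not maximizing)
               for m in HTN[task]]
    return max(results) if maximizing else min(results)

print(ahtn_minimax(0, 2, True))  # max picks +3, min answers with -3 -> 0
```

The point of the sketch is that the branching factor is the number of applicable methods per task, not the number of raw unit actions, which is what makes the approach attractive for RTS-scale games.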
Ontañón
Analogy-based Story Generation (ASG) is a relatively under-explored approach to story generation and computational narrative. In this paper, we present the SAM (Story Analogies through Mapping) algorithm, our attempt to expand the scope and complexity of the stories generated by ASG. Compared with existing work and with our own prior work, SAM makes two main contributions: 1) it employs analogical reasoning both at the level of specific story content and at the level of general domain knowledge, and 2) it performs temporal reasoning about the story's phase structure in order to generate more complex stories. We illustrate SAM through a few example stories.
Ontañón
Game tree search in games with large branching factors is a notoriously hard problem. In this paper, we address this problem with a new sampling strategy for Monte Carlo Tree Search (MCTS) algorithms, called "Naive Sampling", based on a variant of the Multi-armed Bandit problem called the "Combinatorial Multi-armed Bandit" (CMAB) problem. We present a new MCTS algorithm based on Naive Sampling called NaiveMCTS, and evaluate it in the context of real-time strategy (RTS) games. Our results show that as the branching factor grows, NaiveMCTS performs significantly better than other algorithms.
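The naive-sampling idea can be sketched as follows (a simplified illustration, not the paper's exact algorithm: the single epsilon parameter, the tie-breaking scheme, and the toy decomposable reward are all assumptions). A "macro-arm" is a tuple of choices, one per unit, and the naive assumption is that the reward roughly decomposes as a sum over units, so each unit can keep its own local bandit:

```python
import random

def naive_sampling(n_units, n_choices, reward, iters=3000, eps=0.3, seed=0):
    rng = random.Random(seed)
    # local[u][a] = [reward sum, count] for unit u playing choice a
    local = [[[0.0, 0] for _ in range(n_choices)] for _ in range(n_units)]
    globl = {}  # macro-arm tuple -> [reward sum, count]

    def local_pick(u):
        # epsilon-greedy over unit u's local bandit (untried arms win)
        if rng.random() < eps:
            return rng.randrange(n_choices)
        s = local[u]
        return max(range(n_choices),
                   key=lambda a: s[a][0] / s[a][1] if s[a][1] else float("inf"))

    for _ in range(iters):
        if rng.random() < eps or not globl:
            # explore: build a macro-arm from the per-unit local bandits
            arm = tuple(local_pick(u) for u in range(n_units))
        else:
            # exploit: replay the best macro-arm seen so far
            arm = max(globl, key=lambda a: globl[a][0] / globl[a][1])
        r = reward(arm)
        for u, a in enumerate(arm):
            local[u][a][0] += r
            local[u][a][1] += 1
        s = globl.setdefault(arm, [0.0, 0])
        s[0] += r
        s[1] += 1

    return max(globl, key=lambda a: globl[a][0] / globl[a][1])

# Toy decomposable reward: unit u's best choice is choice u.
best = naive_sampling(3, 3, lambda arm: sum(1.0 for u, a in enumerate(arm)
                                            if a == u))
print(best)
```

The key property the sketch illustrates: the local bandits let good per-unit choices be discovered without ever enumerating all `n_choices ** n_units` macro-arms, which is what makes the approach viable as the branching factor grows.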
Ontañón
Recent work has shown that incorporating action probability models (models that, given a game state, can predict the probability with which an expert would play each move) into MCTS can lead to significant performance improvements in a variety of adversarial games, including RTS games. This paper presents a collection of experiments aimed at understanding the relationship between the amount of training data, the predictive performance of the action models, the effect of these models on the branching factor of the game, and the resulting performance gains in MCTS. Experiments are carried out in the context of the microRTS simulator, showing that more accurate predictive models do not necessarily result in better MCTS performance.