Learning Continuous Action Models in a Real-Time Strategy Environment

AAAI Conferences

Although several researchers have integrated methods for reinforcement learning (RL) with case-based reasoning (CBR) to model continuous action spaces, existing integrations typically employ discrete approximations of these models. This limits the set of actions that can be modeled, and may lead to non-optimal solutions. We introduce the Continuous Action and State Space Learner (CASSL), an integrated RL/CBR algorithm that uses continuous models directly. Our empirical study shows that CASSL significantly outperforms two baseline approaches for selecting actions on a task from a real-time strategy gaming environment.
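The sketch below illustrates the general idea of continuous-model action selection, not the CASSL algorithm itself: retrieve the cases nearest the query state, fit a local model of estimated value over the continuous action dimension, and choose the action that maximizes it. All function and variable names here are illustrative assumptions.

# Illustrative sketch (not the authors' CASSL implementation): select a
# continuous action by retrieving nearby cases and fitting a local model
# of estimated value over the action dimension, then maximizing it.
import numpy as np

def select_action(case_states, case_actions, case_values, state, k=8,
                  action_bounds=(0.0, 1.0)):
    """case_states: (n, d) states; case_actions: (n,) scalar continuous
    actions; case_values: (n,) observed returns. Names are hypothetical."""
    # Retrieve the k cases whose states are nearest the query state.
    dists = np.linalg.norm(case_states - state, axis=1)
    nearest = np.argsort(dists)[:k]
    acts, vals = case_actions[nearest], case_values[nearest]
    # Fit a quadratic value model over the continuous action dimension.
    coeffs = np.polyfit(acts, vals, deg=2)
    # Maximize the fitted model over a fine grid within the action bounds.
    grid = np.linspace(*action_bounds, 200)
    return grid[np.argmax(np.polyval(coeffs, grid))]

# Toy usage: three state features, scalar continuous action in [0, 1].
rng = np.random.default_rng(0)
S = rng.random((50, 3))
A = rng.random(50)
V = -(A - 0.6) ** 2 + rng.normal(0, 0.01, 50)
print(select_action(S, A, V, state=np.array([0.5, 0.5, 0.5])))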


A Real-Time Opponent Modeling System for Rush Football

AAAI Conferences

One drawback of using plan recognition in adversarial games is that players often must commit to a plan before it is possible to infer the opponent's intentions. In such cases, it is valuable to couple plan recognition with plan repair, particularly in multi-agent domains where complete replanning is not computationally feasible. This paper presents a method for learning plan repair policies in real time using Upper Confidence Bounds applied to Trees (UCT). We demonstrate how these policies can be coupled with plan recognition in an American football game (Rush 2008) to create an autonomous offensive team capable of responding to unexpected changes in defensive strategy. Our real-time version of UCT learns play modifications that result in significantly higher average yardage and fewer interceptions than either the baseline game or domain-specific heuristics. Although it is possible to use the actual game simulator to measure reward offline, executing UCT in real time demands a different approach; here we describe two modules that reuse data from offline UCT searches to learn accurate state and reward estimators.
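For reference, the selection rule at the core of UCT is UCB1. The sketch below applies it to choosing among candidate plan repairs; the reward estimator is only a stand-in for the learned offline estimators the paper describes, and all names (estimate_reward, the repair labels) are hypothetical.

# Minimal sketch of the UCB1 rule used inside UCT, applied to choosing
# among candidate plan repairs; estimate_reward stands in for a learned
# reward model (e.g., expected yardage) and is not from the paper.
import math, random

def ucb1_choose(stats, c=1.4):
    """stats: {repair: (visits, total_reward)}; returns the repair maximizing UCB1."""
    total_visits = sum(v for v, _ in stats.values())
    def score(item):
        repair, (visits, total_reward) = item
        if visits == 0:
            return float("inf")  # try unvisited repairs first
        mean = total_reward / visits
        return mean + c * math.sqrt(math.log(total_visits) / visits)
    return max(stats.items(), key=score)[0]

def estimate_reward(repair):
    # Hypothetical stand-in for a learned reward estimator.
    return random.gauss({"screen_pass": 4.0, "draw": 3.0, "deep_route": 6.0}[repair], 2.0)

stats = {r: (0, 0.0) for r in ["screen_pass", "draw", "deep_route"]}
for _ in range(500):
    r = ucb1_choose(stats)
    visits, total = stats[r]
    stats[r] = (visits + 1, total + estimate_reward(r))
print(max(stats, key=lambda r: stats[r][1] / stats[r][0]))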


The Case for Case-Based Transfer Learning

AI Magazine

Case-based reasoning (CBR) is a problem-solving process in which a new problem is solved by retrieving a similar situation and reusing its solution. Transfer learning occurs when, after gaining experience from learning how to solve source problems, the same learner exploits this experience to improve performance and/or learning on target problems. In transfer learning, the differences between the source and target problems characterize the transfer distance. CBR can support transfer learning methods in multiple ways. We illustrate how CBR and transfer learning interact and characterize three approaches for using CBR in transfer learning: (1) as a transfer learning method, (2) for problem learning, and (3) to transfer knowledge between sets of problems. We describe examples of these approaches from our own and related work and discuss applicable transfer distances for each. We close with conclusions and directions for future research applying CBR to transfer learning.
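As a concrete illustration of the retrieve-and-reuse cycle described above (a sketch under assumed feature names, not a method from the article), a case base built on source problems can be queried with a target problem and the nearest case's solution reused:

# Illustrative sketch of the basic CBR retrieve-and-reuse cycle, where
# cases learned on source problems are reused for a target problem.
# The similarity measure, features, and adaptation rule are hypothetical.
from dataclasses import dataclass

@dataclass
class Case:
    problem: dict   # feature -> value
    solution: str

def similarity(p1, p2):
    """Fraction of shared features whose values match (a crude proxy for transfer distance)."""
    shared = set(p1) & set(p2)
    if not shared:
        return 0.0
    return sum(p1[f] == p2[f] for f in shared) / len(shared)

def retrieve(case_base, target_problem):
    # Return the source case most similar to the target problem.
    return max(case_base, key=lambda c: similarity(c.problem, target_problem))

def reuse(case, target_problem):
    # Trivial adaptation: copy the retrieved solution; real systems adapt it.
    return case.solution

source_cases = [
    Case({"terrain": "open", "enemy": "ranged"}, "flank_left"),
    Case({"terrain": "choke", "enemy": "melee"}, "hold_position"),
]
target = {"terrain": "choke", "enemy": "ranged"}
best = retrieve(source_cases, target)
print(reuse(best, target))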


A Review of Real-Time Strategy Game AI

AI Magazine

This literature review covers AI techniques used for real-time strategy video games, focusing specifically on StarCraft. It finds that the main areas of current academic research are tactical and strategic decision making, plan recognition, and learning, and it outlines the research contributions in each of these areas. The paper then contrasts the use of game AI in academia and industry, finding that academic research is heavily focused on creating game-winning agents, while industry aims to maximize player enjoyment. It finds that industry adoption of academic research is low because it is either inapplicable or too time-consuming and risky to implement in a new game, which highlights an area for potential investigation: bridging the gap between academia and industry. Finally, the areas of spatial reasoning, multiscale AI, and cooperation are found to require future work, and standardized evaluation methods are proposed to produce comparable results between studies. AI has notably been applied to board games, such as chess, Scrabble, and backgammon, creating competition that has sped the development of many heuristic-based search techniques (Schaeffer 2001).

