Collaborating Authors

Bosansky


Double-oracle sampling method for Stackelberg Equilibrium approximation in general-sum extensive-form games

arXiv.org Artificial Intelligence

The paper presents a new method for approximating Strong Stackelberg Equilibrium in general-sum sequential games with imperfect information and perfect recall. The proposed approach is generic as it does not rely on any specific properties of a particular game model. The method iteratively interleaves the following two phases: (1) guided Monte Carlo Tree Search sampling of the Follower's strategy space and (2) building the Leader's behavior strategy tree for which the sampled Follower's strategy is an optimal response. This solution scheme is evaluated with respect to the Leader's expected utility and time requirements on three sets of interception games with varying characteristics, played on graphs. A comparison with three state-of-the-art MILP/LP-based methods shows that in the vast majority of test cases the proposed simulation-based approach finds optimal Leader strategies, while outperforming the competing methods in time scalability and memory requirements.
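
As a much simplified, hedged illustration of the two-phase idea (not the paper's algorithm), the sketch below approximates a Strong Stackelberg Equilibrium in a one-shot bimatrix game: it enumerates Follower responses, standing in for guided MCTS sampling of the Follower's strategy space, and for each one searches over Leader commitments for which that response is optimal, keeping the best inducible pair. The payoff matrices and all names are hypothetical.

# Hedged sketch: SSE approximation in a one-shot bimatrix game.
# Follower "sampling" is plain enumeration here; the paper uses guided MCTS
# over an extensive-form strategy space instead.

leader_payoff   = [[2.0, 4.0, 1.0],   # rows = Leader actions, cols = Follower actions
                   [3.0, 0.0, 2.0]]
follower_payoff = [[1.0, 0.0, 2.0],
                   [0.0, 2.0, 1.0]]

def expected(matrix, p, j):
    """Expected payoff of Follower action j against the Leader mix (p, 1-p)."""
    return p * matrix[0][j] + (1.0 - p) * matrix[1][j]

best_value, best_commitment = float("-inf"), None
follower_actions = range(len(follower_payoff[0]))

for j in follower_actions:                       # phase (1): sampled Follower strategy
    for step in range(101):                      # phase (2): search Leader commitments
        p = step / 100.0                         # prob. of the Leader's first action
        # keep only commitments for which j is a Follower best response
        if all(expected(follower_payoff, p, j) >= expected(follower_payoff, p, k)
               for k in follower_actions):
            value = expected(leader_payoff, p, j)
            if value > best_value:
                best_value, best_commitment = value, (p, j)

print("approx. SSE value:", best_value, "commitment:", best_commitment)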


Combining Incremental Strategy Generation and Branch and Bound Search for Computing Maxmin Strategies in Imperfect Recall Games

AAAI Conferences

Extensive-form games with imperfect recall are an important model of dynamic games where the players forget previously known information. Often, imperfect recall games are the result of an abstraction algorithm that simplifies a large game with perfect recall. Unfortunately, solving an imperfect recall game is fundamentally problematic since a Nash equilibrium need not exist. Alternatively, we can seek maxmin strategies that guarantee an expected outcome. The only existing algorithm computing maxmin strategies in imperfect recall games, however, requires approximating a bilinear program whose size is proportional to the size of the game and thus has limited scalability. We propose a novel algorithm for computing maxmin strategies that combines this approximate algorithm with an incremental strategy-generation technique designed previously for extensive-form games with perfect recall. Experimental evaluation shows that the novel algorithm builds only a fraction of the game tree and improves scalability by several orders of magnitude. Finally, we demonstrate that our algorithm can solve an abstracted variant of a large game faster than the algorithms operating on the unabstracted perfect-recall variant.
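
As a hedged, much simplified illustration of the incremental strategy-generation loop (the double-oracle pattern this line of work builds on), the sketch below solves a small zero-sum matrix game: an LP over the restricted game stands in for the paper's approximate bilinear program, and exact best responses over the full game decide which strategies to add next. The payoff matrix is hypothetical, and the branch-and-bound and imperfect-recall machinery are omitted.

# Hedged sketch of incremental strategy generation (double oracle) on a
# zero-sum matrix game; the restricted-game LP stands in for the paper's
# approximate bilinear program.
import numpy as np
from scipy.optimize import linprog

A = np.array([[3., -1.,  2.,  0.],
              [1.,  2., -2.,  1.],
              [0.,  1.,  3., -1.]])   # row player maximizes, column player minimizes

def solve_zero_sum(M):
    """Maxmin mixed strategy of the row player of M, and the game value."""
    m, n = M.shape
    # variables: x_1..x_m, v ; maximize v  <=>  minimize -v
    c = np.concatenate([np.zeros(m), [-1.0]])
    A_ub = np.hstack([-M.T, np.ones((n, 1))])    # v - x^T M e_j <= 0 for every column j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:m], -res.fun

rows, cols = {0}, {0}                            # restricted sets, grown incrementally
while True:
    R, C = sorted(rows), sorted(cols)
    x_r, value = solve_zero_sum(A[np.ix_(R, C)])        # row strategy, restricted game
    y_r, _ = solve_zero_sum(-A[np.ix_(R, C)].T)         # column strategy (own maxmin)
    x = np.zeros(A.shape[0]); x[R] = x_r
    y = np.zeros(A.shape[1]); y[C] = y_r
    br_row = int(np.argmax(A @ y))                      # best responses in the FULL game
    br_col = int(np.argmin(x @ A))
    if br_row in rows and br_col in cols:               # no improving deviation: done
        break
    rows.add(br_row); cols.add(br_col)

print("restricted sets:", sorted(rows), sorted(cols), "value ~", round(value, 3))

At termination the restricted equilibrium is an equilibrium of the full game, since neither player's full-game best response leaves the restricted sets; the point of the incremental scheme is that these sets typically stay much smaller than the full strategy spaces.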


Bosansky

AAAI Conferences

We focus on solving two-player zero-sum extensive-form games with perfect information and simultaneous moves. In these games, both players fully observe the current state of the game and simultaneously choose moves that jointly determine the next state. We solve these games with a novel algorithm that relies on two components: (1) it iteratively solves the games that correspond to a single simultaneous move using a double-oracle method, and (2) it prunes states of the game using bounds on the sub-game values obtained by classical Alpha-Beta search on a serialized variant of the game. We experimentally evaluate our algorithm on the Goofspiel card game, a pursuit-evasion game, and randomly generated games. The results show that our novel algorithm typically provides significant running-time improvements and reductions in the number of evaluated nodes compared to the full-search algorithm.
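
A small, hedged illustration of why the serialized variants in component (2) yield usable bounds: for a zero-sum matrix game standing in for a single simultaneous move, the pure "row commits first" value lower-bounds and the pure "column commits first" value upper-bounds the mixed simultaneous-move value, so Alpha-Beta values of the serialized game can prune sub-games whose interval cannot matter. The 2x2 matrix below is hypothetical and the double-oracle component is omitted.

# Hedged sketch: bounds on a single simultaneous move from its serialized
# (purely sequential) variants. The row player maximizes; no pure saddle point.

A = [[3.0, -1.0],
     [-2.0, 2.0]]

# Serialized variant 1: row player commits first, column player answers.
lower = max(min(row) for row in A)
# Serialized variant 2: column player commits first, row player answers.
upper = min(max(A[i][j] for i in range(2)) for j in range(2))

# Mixed (simultaneous-move) value of a 2x2 zero-sum game without a pure
# saddle point, via the standard closed-form expression.
a, b = A[0]
c, d = A[1]
value = (a * d - b * c) / (a + d - b - c)

assert lower <= value <= upper
print(f"serialized bounds: {lower} <= mixed value {value} <= {upper}")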


Bosansky

AAAI Conferences

This paper considers the problem of patrolling multiple targets in a Euclidean environment with a single patrolling unit. We use a game-theoretic approach and model the problem as a two-player zero-sum game in extensive form. Building on existing work on patrolling, we propose a novel non-linear mathematical program for finding strategies in a discretized problem, in which we introduce a general concept of internal states of the patroller. We experimentally evaluate the game value for the patroller on various graphs and strategy representations. The results suggest that adding internal states for the patroller yields better results than adding choice nodes to the discretization.
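
To make the internal-states idea concrete, here is a hedged, hypothetical sketch (not the paper's non-linear program): a randomized patrol strategy whose move distribution depends on the pair (current node, internal state), here the direction of the previous move, so two visits to the same node can behave differently. A short simulation estimates how often each target is covered; the graph, probabilities, and function names are all illustrative assumptions.

# Hedged, hypothetical sketch of a patrol strategy with internal states:
# the move distribution depends on (current node, internal state), where the
# internal state records the direction of the previous move.
import random

NODES = 4                          # targets 0..3 arranged on a cycle (hypothetical graph)

def next_move(node, direction):
    """Return (step, new_direction); keep the current direction with prob. 0.8."""
    step = direction if random.random() < 0.8 else -direction
    return step, step

def simulate(steps=100_000, seed=0):
    random.seed(seed)
    node, direction = 0, +1        # start at target 0, heading "clockwise"
    visits = [0] * NODES
    for _ in range(steps):
        step, direction = next_move(node, direction)
        node = (node + step) % NODES
        visits[node] += 1
    return [v / steps for v in visits]

# Frequencies with which each target is covered; a memoryless (single-state)
# patroller, conditioning on the current node only, could not bias itself
# toward continuing in the same direction.
print(simulate())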