Kadlec, Rudolf
Player of Games
Schmid, Martin, Moravcik, Matej, Burch, Neil, Kadlec, Rudolf, Davidson, Josh, Waugh, Kevin, Bard, Nolan, Timbers, Finbarr, Lanctot, Marc, Holland, Zach, Davoodi, Elnaz, Christianson, Alden, Bowling, Michael
Games have a long history of serving as a benchmark for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a set of perfect information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect information poker variants. We introduce Player of Games, a general-purpose algorithm that unifies previous approaches, combining guided search, self-play learning, and game-theoretic reasoning. Player of Games is the first algorithm to achieve strong empirical performance in large perfect and imperfect information games -- an important step towards truly general algorithms for arbitrary environments. We prove that Player of Games is sound, converging to perfect play as available computation time and approximation capacity increase. Player of Games reaches strong performance in chess and Go, beats the strongest openly available agent in heads-up no-limit Texas hold'em poker (Slumbot), and defeats the state-of-the-art agent in Scotland Yard, an imperfect information game that illustrates the value of guided search, learning, and game-theoretic reasoning.
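The game-theoretic reasoning ingredient can be illustrated in isolation. The sketch below runs regret matching (the building block of counterfactual regret minimization) in self-play on rock-paper-scissors; the players' average strategies converge to the uniform Nash equilibrium. This is a toy illustration of the principle only, not the Player of Games algorithm, which adds guided search and learned value functions on top.

import numpy as np

# Row player's payoff for rock/paper/scissors; the game is zero-sum,
# so the column player receives the negation.
PAYOFF = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]])

def strategy_from_regret(regret):
    # Regret matching: play actions in proportion to positive regret.
    positive = np.maximum(regret, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(3, 1.0 / 3.0)

regret = [np.zeros(3), np.zeros(3)]
strategy_sum = [np.zeros(3), np.zeros(3)]

for _ in range(100_000):
    strats = [strategy_from_regret(r) for r in regret]
    for p in range(2):
        strategy_sum[p] += strats[p]
    u0 = PAYOFF @ strats[1]        # player 0's action values vs opponent mix
    u1 = -(PAYOFF.T @ strats[0])   # player 1's action values (zero-sum)
    regret[0] += u0 - u0 @ strats[0]
    regret[1] += u1 - u1 @ strats[1]

avg = [s / s.sum() for s in strategy_sum]
print(avg)  # both average strategies approach [1/3, 1/3, 1/3]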
Variance Reduction in Monte Carlo Counterfactual Regret Minimization (VR-MCCFR) for Extensive Form Games using Baselines
Schmid, Martin, Burch, Neil, Lanctot, Marc, Moravcik, Matej, Kadlec, Rudolf, Bowling, Michael
Learning strategies for imperfect information games from samples of interaction is a challenging problem. A common method for this setting, Monte Carlo Counterfactual Regret Minimization (MCCFR), can have slow long-term convergence rates due to high variance. In this paper, we introduce a variance reduction technique (VR-MCCFR) that applies to any sampling variant of MCCFR. Using this technique, per-iteration estimated values and updates are reformulated as a function of sampled values and state-action baselines, similar to their use in policy gradient reinforcement learning. The new formulation allows estimates to be bootstrapped from other estimates within the same episode, propagating the benefits of baselines along the sampled trajectory; the estimates remain unbiased even when bootstrapping from other estimates. Finally, we show that given a perfect baseline, the variance of the value estimates can be reduced to zero. Experimental evaluation shows that VR-MCCFR brings an order of magnitude speedup, while the empirical variance decreases by three orders of magnitude. The decreased variance allows CFR+ to be used with sampling for the first time, increasing the speedup to two orders of magnitude.
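The baseline idea can be sketched at a single decision point: every unsampled action is scored by its baseline, and the sampled action receives an importance-corrected residual, which keeps the value estimate unbiased. The sketch below is a hypothetical single-node setup (the names sigma, xi, and baseline mirror the roles of the strategy, sampling policy, and state-action baselines in the abstract); it illustrates the control-variate idea, not the full recursive VR-MCCFR update.

import numpy as np

rng = np.random.default_rng(0)

q_true = np.array([1.0, 3.0, -2.0])   # true action values (toy example)
sigma = np.array([0.2, 0.5, 0.3])     # strategy being evaluated
xi = np.array([1 / 3, 1 / 3, 1 / 3])  # sampling policy

def estimate_value(baseline, n_samples=100_000):
    # One sampled action per trial, as in outcome sampling.
    a = rng.choice(3, size=n_samples, p=xi)
    u = q_true[a]                                # observed payoffs
    q_hat = np.tile(baseline, (n_samples, 1))    # unsampled actions keep b(a)
    rows = np.arange(n_samples)
    q_hat[rows, a] += (u - baseline[a]) / xi[a]  # importance-corrected residual
    estimates = q_hat @ sigma                    # value of the decision point
    return estimates.mean(), estimates.var()

true_v = sigma @ q_true
for name, b in [("no baseline", np.zeros(3)),
                ("perfect baseline", q_true)]:
    mean, var = estimate_value(b)
    print(f"{name}: mean={mean:.3f} (true {true_v:.3f}), variance={var:.4f}")

With a zero baseline this reduces to the plain importance-sampled estimator; with the perfect baseline the variance collapses to zero while the mean stays at the true value, matching the claim in the abstract.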
A Boo(n) for Evaluating Architecture Performance
Bajgar, Ondrej, Kadlec, Rudolf, Kleindienst, Jan
We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws. Each time a model is trained, one gets a different result due to random factors in the training process, which include random parameter initialization and random data shuffling. Reporting the best single model performance does not appropriately address this stochasticity. We propose a normalized expected best-out-of-$n$ performance ($\text{Boo}_n$) as a way to correct these problems.
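One way to read the expected best-out-of-$n$ idea: given $m \ge n$ validation results from independent training runs, average the maximum over every size-$n$ subset, which is an unbiased estimator of the expected best of $n$ fresh runs. The closed-form weighting below follows standard order statistics; the paper's exact normalization may differ, so treat this as a sketch of the concept rather than the paper's definition.

import numpy as np
from math import comb

def boo_n(results, n):
    """Expected best-of-n performance estimated from m >= n runs."""
    x = np.sort(np.asarray(results, dtype=float))  # x[0] <= ... <= x[m-1]
    m = len(x)
    assert 1 <= n <= m
    # x[i] is the maximum of a size-n subset exactly when the other
    # n-1 elements come from the i smaller values: comb(i, n-1) subsets.
    weights = np.array([comb(i, n - 1) for i in range(m)]) / comb(m, n)
    return float(weights @ x)

runs = [0.712, 0.705, 0.731, 0.699, 0.718, 0.726, 0.708, 0.721]
print(boo_n(runs, 1))  # mean performance of a single run
print(boo_n(runs, 5))  # expected best validation score out of 5 runs

Unlike reporting the single best run, this estimate does not grow without bound as more models are trained, which is the flaw the abstract points out.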
Planning Is the Game: Action Planning as a Design Tool and Game Mechanism
Kadlec, Rudolf (Charles University in Prague) | Toth, Csaba (Charles University in Prague) | Cerny, Martin (Charles University in Prague) | Bartak, Roman (Charles University in Prague) | Brom, Cyril (Charles University in Prague)
Recent development in game AI has seen action planning and its derivatives being adapted for controlling agents in classical types of games, such as FPSs or RPGs. Complementarily, one can seek new types of gameplay elements inspired by planning. We propose and formally define a new game "genre" called anticipation games and demonstrate that planning can be used as their key concept both at design time and at run time. In an anticipation game, a human player observes a computer-controlled agent or agents, tries to predict their actions, and indirectly helps them achieve their goal. The paper describes an example prototype of an anticipation game we developed. The player helps a burglar steal an artifact from a museum guarded by guard agents. The burglar has incomplete knowledge of the environment, and his plan will contain pitfalls. The player has to identify these pitfalls by observing the burglar's behavior and change the environment so that the burglar replans and avoids them. The game prototype is evaluated in a small-scale human-subject study, which suggests that the anticipation game concept is promising.
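The replanning loop at the heart of the mechanic can be sketched abstractly: the agent plans a route, the player edits the world, and the agent plans again. The grid, the BFS planner, and the blocking move below are illustrative stand-ins, not the museum prototype described in the paper.

from collections import deque

def plan(grid, start, goal):
    """Breadth-first search; returns a shortest path as a list of cells."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

grid = [list("..."), list("..."), list("...")]
start, goal = (1, 0), (1, 2)
print(plan(grid, start, goal))  # the agent's plan walks straight through (1, 1)
grid[1][1] = "#"                # the player blocks the pitfall cell
print(plan(grid, start, goal))  # the agent replans and detours around it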