Smooth UCT Search in Computer Poker

AAAI Conferences

Self-play Monte Carlo Tree Search (MCTS) has been successful in many perfect-information two-player games. Although these methods have been extended to imperfect-information games, so far they have not achieved the same level of practical success or theoretical convergence guarantees as competing methods. In this paper we introduce Smooth UCT, a variant of the established Upper Confidence Bounds Applied to Trees (UCT) algorithm. Smooth UCT agents mix in their average policy during self-play and the resulting planning process resembles game-theoretic fictitious play. When applied to Kuhn and Leduc poker, Smooth UCT approached a Nash equilibrium, whereas UCT diverged. In addition, Smooth UCT outperformed UCT in Limit Texas Hold'em and won 3 silver medals in the 2014 Annual Computer Poker Competition.
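As an illustration of the action-selection idea described above, here is a minimal sketch of a Smooth UCT-style node, assuming a per-node table of visit counts and value estimates: with some probability the agent takes the standard UCB1 choice, and otherwise it samples from its average policy (its visit-count proportions), which is what makes self-play resemble fictitious play. The class layout, the exploration constant c, and the mixing parameter eta are illustrative assumptions, not the authors' exact implementation.

    import math
    import random

    class SmoothUCTNode:
        """Illustrative node: per-action visit counts and value estimates."""
        def __init__(self, actions, c=1.4):
            self.c = c
            self.n = {a: 0 for a in actions}    # visit counts N(s, a)
            self.q = {a: 0.0 for a in actions}  # value estimates Q(s, a)

        def total_visits(self):
            return sum(self.n.values()) or 1

        def ucb_action(self):
            # Standard UCT/UCB1 selection.
            t = self.total_visits()
            return max(self.n, key=lambda a: self.q[a] +
                       self.c * math.sqrt(math.log(t) / (self.n[a] + 1e-9)))

        def average_policy_action(self):
            # Sample an action in proportion to its visit count (the average policy).
            t = self.total_visits()
            r, acc = random.random(), 0.0
            for a, count in self.n.items():
                acc += count / t
                if r <= acc:
                    return a
            return random.choice(list(self.n))

        def select(self, eta):
            # Smooth UCT idea: mix the greedy UCB choice with the average policy.
            return self.ucb_action() if random.random() < eta else self.average_policy_action()

        def update(self, action, reward):
            self.n[action] += 1
            self.q[action] += (reward - self.q[action]) / self.n[action]

A full implementation would also need a schedule for the mixing parameter; the sketch leaves that choice to the caller of select().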


Fictitious Play Outperforms Counterfactual Regret Minimization

arXiv.org Artificial Intelligence

In two-player zero-sum games a Nash equilibrium strategy is guaranteed to win (or tie) in expectation against any opposing strategy by the minimax theorem. In games with more than two players there can be multiple equilibria with different values to the players, and following one has no performance guarantee; however, it was shown that a Nash equilibrium strategy defeated a variety of agents submitted for a class project in a 3-player imperfect-information game, Kuhn poker [13]. This demonstrates that Nash equilibrium strategies can be successful in practice despite the fact that they do not have a performance guarantee. While a Nash equilibrium can be computed in polynomial time for two-player zero-sum games, it is PPAD-hard to compute for nonzero-sum games and for games with 3 or more agents, and it is widely believed that no efficient algorithms exist [8, 9]. Counterfactual regret minimization (CFR) is an iterative self-play procedure that has been proven to converge to a Nash equilibrium in two-player zero-sum games [28].
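Since the title refers to fictitious play, a minimal sketch of that self-play loop may help, written here for a two-player zero-sum matrix game: each player repeatedly best-responds to the opponent's empirical average strategy, and in this setting the empirical averages converge toward a Nash equilibrium. The payoff matrix, the iteration count, and the use of NumPy are illustrative assumptions; the paper itself concerns much larger, multi-player, imperfect-information games.

    import numpy as np

    def fictitious_play(payoff, iterations=10000):
        """Fictitious play in a two-player zero-sum matrix game.
        payoff[i, j] is the row player's payoff; the column player receives -payoff[i, j]."""
        m, n = payoff.shape
        row_counts = np.zeros(m)  # how often each row action was chosen as a best response
        col_counts = np.zeros(n)
        row_counts[0] = col_counts[0] = 1.0  # arbitrary initial actions
        for _ in range(iterations):
            col_avg = col_counts / col_counts.sum()
            row_avg = row_counts / row_counts.sum()
            # Each player best-responds to the opponent's empirical average strategy.
            row_counts[np.argmax(payoff @ col_avg)] += 1
            col_counts[np.argmin(row_avg @ payoff)] += 1
        return row_counts / row_counts.sum(), col_counts / col_counts.sum()

    # Example: Rock-Paper-Scissors; both empirical averages approach (1/3, 1/3, 1/3).
    rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
    print(fictitious_play(rps))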


Value Functions for Depth-Limited Solving in Zero-Sum Imperfect-Information Games

arXiv.org Artificial Intelligence

Depth-limited look-ahead search is an essential tool for agents playing perfect-information games. In imperfect-information games, the lack of a clear notion of the value of a state makes designing theoretically sound depth-limited solving algorithms substantially more difficult. Furthermore, most results in this direction only consider the domain of poker. We consider two-player zero-sum extensive-form games in general. We provide domain-independent definitions of optimal value functions and prove that they can be used for depth-limited look-ahead game solving. We prove that the minimal set of game states necessary to define the value functions is related to common knowledge of the players. We show that the value function may be defined in several structurally different ways. None of them is unique, but the set of possible outputs is convex, which enables approximating the value function by machine learning models.
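One way to picture the object being defined: at the depth limit, a value function takes the common-knowledge (public) situation together with both players' reach-probability distributions over their private states and returns a value for every information state. The interface below is a loose sketch under that reading; the class names, the Range alias, and the trivial placeholder implementation are assumptions for illustration, not the paper's construction.

    from dataclasses import dataclass
    from typing import Dict, Tuple

    @dataclass(frozen=True)
    class PublicState:
        """Common-knowledge description of where the game stands at the depth limit."""
        key: str

    # A "range": reach probabilities over one player's possible private states.
    Range = Dict[str, float]

    class ValueFunction:
        def evaluate(self, public_state: PublicState,
                     range_p1: Range, range_p2: Range) -> Tuple[Dict[str, float], Dict[str, float]]:
            """Return per-infoset values for both players at the depth limit.
            The idea is that any function whose outputs stay inside the convex set of
            optimal values can be plugged into depth-limited look-ahead solving."""
            raise NotImplementedError

    class ZeroValue(ValueFunction):
        """Toy stand-in that values every infoset at 0; a learned model (e.g. a
        regressor trained on solved subgames) would replace this."""
        def evaluate(self, public_state, range_p1, range_p2):
            return ({h: 0.0 for h in range_p1}, {h: 0.0 for h in range_p2})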


Depth-Limited Solving for Imperfect-Information Games

Neural Information Processing Systems

A fundamental challenge in imperfect-information games is that states do not have well-defined values. As a result, depth-limited search algorithms used in single-agent settings and perfect-information games do not apply. This paper introduces a principled way to conduct depth-limited solving in imperfect-information games by allowing the opponent to choose among a number of strategies for the remainder of the game at the depth limit. Each one of these strategies results in a different set of values for leaf nodes. This forces an agent to be robust to the different strategies an opponent may employ. We demonstrate the effectiveness of this approach by building a master-level heads-up no-limit Texas hold'em poker AI that defeats two prior top agents using only a 4-core CPU and 16 GB of memory. Developing such a powerful agent would have previously required a supercomputer.
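A small sketch of the leaf evaluation this paragraph describes: each state at the depth limit carries one value per candidate opponent continuation strategy, and since the opponent will pick whichever continuation is worst for us, the search evaluates the depth limit pessimistically. The data layout, the sign convention (values from our perspective), and the direct minimum are simplifying assumptions; in the actual method the opponent's choice is part of the subgame being solved.

    from typing import Dict, List

    def depth_limit_value(leaf_probs: Dict[str, float],
                          values: Dict[str, List[float]]) -> float:
        """leaf_probs: our belief over leaf states at the depth limit.
        values[s][k]: our payoff at leaf state s if the opponent plays candidate
        continuation strategy k for the remainder of the game. The opponent is
        assumed to pick the continuation that minimizes our expected value."""
        num_continuations = len(next(iter(values.values())))
        expected = [sum(leaf_probs[s] * values[s][k] for s in leaf_probs)
                    for k in range(num_continuations)]
        return min(expected)

    # Two leaf states, three candidate opponent continuations:
    print(depth_limit_value({"s1": 0.5, "s2": 0.5},
                            {"s1": [1.0, 0.2, 0.6], "s2": [-0.4, 0.8, 0.0]}))  # -> 0.3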


AIVAT: A New Variance Reduction Technique for Agent Evaluation in Imperfect Information Games

AAAI Conferences

Evaluating agent performance when outcomes are stochastic and agents use randomized strategies can be challenging when there is limited data available. The variance of sampled outcomes may make the simple approach of Monte Carlo sampling inadequate. This is the case for agents playing heads-up no-limit Texas hold'em poker, where man-machine competitions typically involve multiple days of consistent play by multiple players but still can (and sometimes did) result in statistically insignificant conclusions. In this paper, we introduce AIVAT, a low-variance, provably unbiased value assessment tool that exploits an arbitrary heuristic estimate of state value, as well as the explicit strategy of a subset of the agents. Unlike existing techniques, which reduce the variance from chance events or only consider game-ending actions, AIVAT reduces the variance both from choices by nature and from players with a known strategy. The resulting estimator produces results that significantly outperform previous state-of-the-art techniques. It was able to reduce the standard deviation of a Texas hold'em poker man-machine match by 85% and consequently requires 44 times fewer games to draw the same statistical conclusion. AIVAT enabled the first statistically significant AI victory against professional poker players in no-limit hold'em. Furthermore, the technique was powerful enough to produce statistically significant results versus individual players, not just an aggregate pool of the players. We also used AIVAT to analyze a short series of AI vs. human poker tournaments, producing statistically significant results with as few as 28 matches.
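The paragraph's key idea, correcting the raw outcome wherever the acting distribution is known, can be pictured with a simplified control-variate sketch (this is not the full AIVAT estimator; the function names and the per-action baseline are illustrative assumptions): for every chance event or known-strategy decision, the baseline value of the sampled action is subtracted and its expectation under the known distribution is added back, which leaves the expected value unchanged while removing variance from those decisions.

    from typing import Dict, List, Tuple

    # One corrected decision point: the known action probabilities, the action
    # actually taken, and a heuristic baseline value for each possible action.
    Decision = Tuple[Dict[str, float], str, Dict[str, float]]

    def corrected_estimate(observed_payoff: float, decisions: List[Decision]) -> float:
        """Control-variate style correction: because the sampled action is drawn
        from the same distribution used to compute the expectation, each
        correction term has zero mean, so the estimator stays unbiased while the
        baseline absorbs variance from chance and known-strategy choices."""
        estimate = observed_payoff
        for probs, taken, baseline in decisions:
            expected_baseline = sum(probs[a] * baseline[a] for a in probs)
            estimate += expected_baseline - baseline[taken]
        return estimate

    # Example: one chance event (a dealt card) with a crude baseline per outcome.
    print(corrected_estimate(1.0, [({"Ace": 0.5, "King": 0.5}, "Ace",
                                    {"Ace": 0.7, "King": -0.1})]))  # 1.0 + (0.3 - 0.7) = 0.6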


Heads-Up Limit Hold'em Poker Is Solved

Communications of the ACM

Mirowski cites Turing as author of the paragraph containing this remark. The paragraph appeared in [46], in a chapter with Turing listed as one of three contributors. Which parts of the chapter are the work of which contributor, particularly the introductory material containing this quote, is not made explicit.


Bot makes poker pros fold: What's next for AI?

#artificialintelligence

Carnegie Mellon's No-Limit Texas Hold'em software made short work of four of the world's best professional poker players in Pittsburgh at the grueling "Brains vs. Artificial Intelligence" poker tournament. Poker now joins chess, Jeopardy, go, and many other games at which programs outplay people. But poker is different from all the others in one big way: players have to guess based on partial, or "imperfect" information. "Chess and Go are games of perfect information," explains Libratus co-creator Noam Brown, a Ph.D. candidate at Carnegie Mellon. "All the information in the game is available for both sides to see.


The poker machine

AITopics Original Links

The World Series of Poker in Las Vegas in 2000 attracted a record 500 players. Over four days, contestants were gradually eliminated until just two men were left to face off in poker's flagship game, Texas Hold'Em. The more experienced player was a living legend named T.J. Cloutier, a 62-year-old Texan road gambler who was regarded by many as the best in the world. His opponent was a 37-year-old computer scientist from California named Chris Ferguson who had only been playing World Series games since 1996, never finishing higher than fourth place. Ferguson might have been a relative newcomer, but he was hard to miss. He had earned the nickname "Jesus" because he hid his face behind a long beard and hair that cascaded over his shoulders, buttressed by wraparound mirror shades and a big cowboy hat. Ferguson never spoke during a game, determined not to show any sign of human emotion; he didn't pay much attention to other players' nervous tics either, preferring to draw all his information from the cards. In Las Vegas that week he had destroyed the field and came to the table with 10 times as many chips as his opponent. Cloutier, a former football pro with huge shoulders, paws that dwarfed his cards, and a dominant presence at the table, had seen it all before.


Refining Subgames in Large Imperfect Information Games

AAAI Conferences

The leading approach to solving large imperfect information games is to pre-calculate an approximate solution using a simplified abstraction of the full game; that solution is then used to play the original, full-scale game. The abstraction step is necessitated by the size of the game tree. However, as the original game progresses, the remaining portion of the tree (the subgame) becomes smaller. An appealing idea is to use the simplified abstraction to play the early parts of the game and then, once the subgame becomes tractable, to calculate a solution using a finer-grained abstraction in real time, creating a combined final strategy. While this approach is straightforward for perfect information games, it is a much more complex problem for imperfect information games. If the subgame is solved locally, the opponent can alter his play prior to this subgame to exploit our combined strategy. To prevent this, we introduce the notion of subgame margin, a simple value with appealing properties. If any best response reaches the subgame, the improvement in exploitability of the combined strategy is (at least) proportional to the subgame margin. This motivates subgame refinements that result in large positive margins. Unfortunately, current techniques either neglect the subgame margin (potentially leading to a large negative subgame margin and drastically more exploitable strategies), or guarantee only a non-negative subgame margin (possibly producing the original, unrefined strategy, even if much stronger strategies are possible). Our technique remedies this problem by maximizing the subgame margin and is guaranteed to find the optimal solution. We evaluate our technique using one of the top participants of the AAAI-14 Computer Poker Competition, the leading playground for agents in the imperfect-information setting.
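To make the margin concrete: for every opponent information set at the root of the subgame, compare the opponent's counterfactual best-response value against the original strategy with the value against the refined strategy, and take the worst case over those entry points. Below is a minimal sketch of that final step, assuming the counterfactual best-response values have already been computed by a solver; the dictionaries and names are illustrative.

    from typing import Dict

    def subgame_margin(cbv_original: Dict[str, float],
                       cbv_refined: Dict[str, float]) -> float:
        """cbv_*[I]: the opponent's counterfactual best-response value at entry
        information set I against the original and the refined strategy. A
        positive margin means every way of entering the subgame got (weakly)
        worse for the opponent, which is what bounds the exploitability of the
        combined strategy."""
        return min(cbv_original[I] - cbv_refined[I] for I in cbv_original)

    # Example: the refinement helps at both entry infosets; the margin is 0.1.
    print(subgame_margin({"I1": 0.5, "I2": 0.2}, {"I1": 0.4, "I2": 0.0}))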


Smooth UCT Search in Computer Poker

AAAI Conferences

Self-play Monte Carlo Tree Search (MCTS) has been successful in many perfect-information two-player games. Although these methods have been extended to imperfect-information games, so far they have not achieved the same level of practical success or theoretical convergence guarantees as competing methods. In this paper we introduce Smooth UCT, a variant of the established Upper Confidence Bounds Applied to Trees (UCT) algorithm.

They concluded that UCT quickly finds a good but suboptimal policy, while Outcome Sampling initially learns more slowly but converges to the optimal policy over time. In this paper, we address the question whether the inability of UCT to converge to a Nash equilibrium can be overcome while retaining UCT's fast initial learning rate. We focus on the full-game MCTS setting, which is an important step towards developing sound variants of online MCTS in imperfect-information games. In particular, we introduce Smooth UCT, which combines...