Poker


Heinrich

AAAI Conferences

Self-play Monte Carlo Tree Search (MCTS) has been successful in many perfect-information two-player games. Although these methods have been extended to imperfect-information games, so far they have not achieved the same level of practical success or theoretical convergence guarantees as competing methods. In this paper we introduce Smooth UCT, a variant of the established Upper Confidence Bounds Applied to Trees (UCT) algorithm. Smooth UCT agents mix in their average policy during self-play and the resulting planning process resembles game-theoretic fictitious play. When applied to Kuhn and Leduc poker, Smooth UCT approached a Nash equilibrium, whereas UCT diverged. In addition, Smooth UCT outperformed UCT in Limit Texas Hold'em and won 3 silver medals in the 2014 Annual Computer Poker Competition.
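
To make the mixing step concrete, here is a minimal sketch of Smooth UCT's action selection (illustrative Python, not the authors' implementation): the node fields are assumptions, and where the paper anneals the mixing weight per node, a fixed `eta` is used here for brevity.

```python
import math
import random

# Illustrative sketch of Smooth UCT action selection (not the paper's
# implementation). `node` is assumed to carry per-action visit counts and
# value estimates; with probability `eta` the agent acts greedily via UCB,
# otherwise it samples from its average policy, i.e. the empirical
# distribution of its past action choices at this node.

def smooth_uct_select(node, c=2.0, eta=0.9):
    total = sum(node.visits[a] for a in node.actions)
    if total == 0 or random.random() < eta:
        # Plain UCT: maximize the upper confidence bound.
        def ucb(a):
            n = node.visits[a]
            if n == 0:
                return float("inf")
            return node.value[a] + c * math.sqrt(math.log(total) / n)
        return max(node.actions, key=ucb)
    # Average policy: sample actions in proportion to their visit counts.
    # Mixing this in is what makes self-play resemble fictitious play.
    weights = [node.visits[a] / total for a in node.actions]
    return random.choices(node.actions, weights=weights)[0]
```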


Online Poker - Bitcoin Z Texas Holdem Poker

#artificialintelligence

Online poker with the #1 Bitcoin Z poker game. Play poker online 24/7 with the official Bitcoin Z Poker game! Texas Hold'em, and more poker coming! Bitcoin Z Texas Holdem is the only place where players can win cryptomoney. Play now!


This New Poker Bot Can Beat Multiple Pros--at Once

#artificialintelligence

The 32-year-old Darren Elias is the only person to have won four World Poker Tour titles and has earned more than $7 million at tournaments. Despite his expertise, he learned something new this spring from an artificial intelligence bot. Elias was helping test new software from researchers at Carnegie Mellon University and Facebook. He and another pro, Chris "Jesus" Ferguson, each played 5,000 hands over the internet in six-way games against five copies of a bot called Pluribus. At the end, the bot was ahead by a good margin.


These online courses teach you how to win at online poker

Mashable

TL;DR: The Ultimate Poker Pro Blueprint Mastery Bundle is on sale for £16.08 as of August 14, saving you 99% on list price. Playing poker online is a totally different game than playing in real life. You aren't playing other people so much as you are just playing the algorithm. Therefore, it requires a touch less skill and a touch more pattern recognition and smarts. In the Ultimate Poker Pro Blueprint Mastery Bundle, you'll learn exactly what it takes to win money playing poker online.


On Strategy Stitching in Large Extensive Form Multiplayer Games

Neural Information Processing Systems

Computing a good strategy in a large extensive form game often demands an extraordinary amount of computer memory, necessitating the use of abstraction to reduce the game size. Typically, strategies from abstract games perform better in the real game as the granularity of abstraction is increased. This paper investigates two techniques for stitching a base strategy in a coarse abstraction of the full game tree, to expert strategies in fine abstractions of smaller subtrees. We provide a general framework for creating static experts, an approach that generalizes some previous strategy stitching efforts. In addition, we show that static experts can create strong agents for both 2-player and 3-player Leduc and Limit Texas Hold'em poker, and that a specific class of static experts can be preferred among a number of alternatives.
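
The play-time side of stitching can be pictured as a simple dispatch between the base strategy and the experts. A toy sketch follows (all names and the region mapping are hypothetical; the paper's contribution lies in how the experts are constructed, which this omits):

```python
# Illustrative sketch of strategy stitching at play time (not the paper's
# algorithm for computing the experts): a coarse base strategy covers the
# whole game, and expert strategies built on finer abstractions of selected
# subtrees override it whenever play enters their subtree.

class TableStrategy:
    """Toy strategy: a lookup table from information-set keys to actions."""
    def __init__(self, table, default="call"):
        self.table, self.default = table, default
    def action(self, infoset):
        return self.table.get(infoset, self.default)

def stitched_action(infoset, region_of, base, experts):
    # `region_of` maps an information set to the expert subtree it belongs
    # to (or None); experts take precedence over the base strategy.
    region = region_of(infoset)
    if region in experts:
        return experts[region].action(infoset)
    return base.action(infoset)

# Toy usage: one expert handles all information sets after a raised pot.
base = TableStrategy({"preflop:J": "fold"})
experts = {"raised-pot": TableStrategy({"flop:JJ": "raise"})}
region_of = lambda i: "raised-pot" if i.startswith("flop") else None
print(stitched_action("flop:JJ", region_of, base, experts))    # raise
print(stitched_action("preflop:J", region_of, base, experts))  # fold
```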


Fictitious Play Outperforms Counterfactual Regret Minimization

arXiv.org Artificial Intelligence

In two-player zero-sum games a Nash equilibrium strategy is guaranteed to win (or tie) in expectation against any opposing strategy by the minimax theorem. In games with more than two players there can be multiple equilibria with different values to the players, and following one has no performance guarantee; however, it was shown that a Nash equilibrium strategy defeated a variety of agents submitted for a class project in a 3-player imperfect-information game, Kuhn poker [13]. This demonstrates that Nash equilibrium strategies can be successful in practice despite the fact that they do not have a performance guarantee. While a Nash equilibrium can be computed in polynomial time for two-player zero-sum games, it is PPAD-hard to compute for nonzero-sum games and games with 3 or more agents, and it is widely believed that no efficient algorithms exist [8, 9]. Counterfactual regret minimization (CFR) is an iterative self-play procedure that has been proven to converge to Nash equilibrium in two-player zero-sum games [28].
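
For readers new to the procedure, here is a generic fictitious-play loop for a two-player zero-sum matrix game (a textbook sketch, not the paper's multiplayer variant): each player repeatedly best-responds to the opponent's empirical average strategy, and the average strategies approach equilibrium.

```python
import numpy as np

# Minimal fictitious play for a two-player zero-sum matrix game. The row
# player maximizes the payoff given by A; the column player minimizes it.
# Each iteration, both players best-respond to the opponent's empirical
# average strategy, then record that response in their own counts.

A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])  # rock-paper-scissors payoffs for the row player

row_counts = np.zeros(3)
col_counts = np.zeros(3)
row_counts[0] = col_counts[0] = 1  # arbitrary initial pure strategies

for _ in range(10000):
    col_avg = col_counts / col_counts.sum()
    row_avg = row_counts / row_counts.sum()
    row_counts[np.argmax(A @ col_avg)] += 1  # best response to column average
    col_counts[np.argmin(row_avg @ A)] += 1  # best response to row average

print(row_counts / row_counts.sum())  # approaches (1/3, 1/3, 1/3)
```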


PokerBot: Create your poker AI bot in Python - Data Blogger

#artificialintelligence

In this tutorial, you will learn step-by-step how to implement a poker bot in Python. First, we need an engine in which we can simulate our poker bot. The engine also has a GUI available that can graphically display a game. Both the engine and the GUI have excellent tutorials on their GitHub pages on how to use them. The choice of engine (and/or GUI) is arbitrary; either can be replaced by any alternative you like.
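
Because the engine and GUI are interchangeable, the bot's decision logic can be sketched engine-agnostically. The class and method names below are hypothetical, not any particular engine's callback API:

```python
# Hypothetical engine-agnostic bot skeleton (the method names below are
# illustrative; a real engine will dictate its own callback interface).

class SimplePokerBot:
    """Folds weak hands, calls medium ones, raises strong ones."""

    RANKS = "23456789TJQKA"

    def hand_strength(self, hole_cards):
        # Crude preflop heuristic: normalized sum of card ranks, with a
        # bonus for pairs. Real bots use equity calculations instead.
        r1, r2 = (self.RANKS.index(c[0]) for c in hole_cards)
        strength = (r1 + r2) / (2 * (len(self.RANKS) - 1))
        if r1 == r2:
            strength = min(1.0, strength + 0.35)
        return strength

    def declare_action(self, hole_cards, valid_actions):
        s = self.hand_strength(hole_cards)
        if s > 0.75 and "raise" in valid_actions:
            return "raise"
        if s > 0.4 and "call" in valid_actions:
            return "call"
        return "fold"

bot = SimplePokerBot()
print(bot.declare_action(["Ah", "Ad"], ["fold", "call", "raise"]))  # raise
```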


What Online Poker Players Can Teach Us About AI

#artificialintelligence

Poker is considered a good challenge for AI, as it is seen as a combination of mathematical/strategic play and human intuition, especially about the strategies of others. I would consider the game a cross between the two extremes of technical vs. human skill: chess and rock-paper-scissors. In chess, the technically superior player will almost always win; an amateur would lose literally 100% of their games to the top chess-playing AI. In rock-paper-scissors, if the top AI plays the perfect strategy of choosing each option 1/3 of the time, it will be unbeatable, but by definition also incapable of beating anyone. To see why, let's analyse how it plays against the Bart Simpson strategy: if your opponent always plays rock, you will play rock 1/3 of the time, paper 1/3, and scissors 1/3, meaning you will tie 1/3 of the time, win 1/3, and lose 1/3.
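
That arithmetic is easy to verify directly; a few illustrative lines compute the uniform strategy's expected score against any fixed opponent:

```python
# Expected score of the uniform random strategy against a fixed opponent
# move in rock-paper-scissors: +1 for a win, -1 for a loss, 0 for a tie.

PAYOFF = {  # PAYOFF[ours][theirs], from our point of view
    "rock":     {"rock": 0, "paper": -1, "scissors": 1},
    "paper":    {"rock": 1, "paper": 0, "scissors": -1},
    "scissors": {"rock": -1, "paper": 1, "scissors": 0},
}

def uniform_value(opponent_move):
    # Average over our three equally likely moves.
    return sum(PAYOFF[m][opponent_move] for m in PAYOFF) / 3

# Against the "Bart Simpson" strategy (always rock) we break even:
print(uniform_value("rock"))  # 0.0
```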


AI beats professionals in six-player poker

#artificialintelligence

The AI, called Pluribus, defeated poker professional Darren Elias, who holds the record for most World Poker Tour titles, and Chris "Jesus" Ferguson, winner of six World Series of Poker events. Each pro separately played 5,000 hands of poker against five copies of Pluribus. In another experiment involving 13 pros, all of whom have won more than $1 million playing poker, Pluribus played five pros at a time for a total of 10,000 hands and again emerged victorious. "Pluribus achieved superhuman performance at multi-player poker, which is a recognized milestone in artificial intelligence and in game theory that has been open for decades," said Tuomas Sandholm, Angel Jordan Professor of Computer Science, who developed Pluribus with Noam Brown, who is finishing his Ph.D. in Carnegie Mellon's Computer Science Department as a research scientist at Facebook AI. "Thus far, superhuman AI milestones in strategic reasoning have been limited to two-party competition. The ability to beat five other players in such a complicated game opens up new opportunities to use AI to solve a wide variety of real-world problems."


r/MachineLearning - AMA: We are Noam Brown and Tuomas Sandholm, creators of the Carnegie Mellon / Facebook multiplayer poker bot Pluribus. We're also joined by a few of the pros Pluribus played against. Ask us anything!

#artificialintelligence

You are right that the algorithms in Pluribus are totally different than reinforcement learning or MCTS. At a high level, that is because our settings are 1) games, that is, there is more than one player, and 2) of imperfect information, that is, when a player has to choose an action, the player does not know the entire state of the world. There is no good textbook on solving imperfect-information games. So, to read up on this literature, you will need to read research papers. Below in this post are selected papers from my research group that would be good to read given that you want to learn about this field.