Poker


What Online Poker Players Can Teach Us About AI

#artificialintelligence

Poker is considered a good challenge for AI, as it is seen as a combination of mathematical/strategic play and human intuition, especially about the strategies of others. I would consider the game a cross between the two extremes of technical vs. human skill: chess and rock-paper-scissors. In chess, the technically superior player will almost always win; an amateur would lose literally 100% of their games to the top chess-playing AI. In rock-paper-scissors, if the top AI plays the perfect strategy of choosing each option 1/3rd of the time, it will be unbeatable, but also, by definition, incapable of beating anyone. To see why, let's analyse how it plays against the Bart Simpson strategy: if your opponent always plays rock, you will still play rock 1/3rd of the time, paper 1/3rd, and scissors 1/3rd, meaning you will tie 1/3rd of the time, win 1/3rd, and lose 1/3rd.
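A quick sanity check on that arithmetic: the expected value of the uniform mix against any fixed opponent is zero. The Python sketch below is my own illustration (the article contains no code, and the function and variable names are placeholders); it evaluates the uniform strategy and a pure paper strategy against an always-rock opponent.

```python
# Illustrative sketch (not from the article): expected value of a mixed
# rock-paper-scissors strategy against a fixed opponent strategy.

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def expected_value(strategy, opponent):
    """Expected score (+1 win, 0 tie, -1 loss) of `strategy` vs `opponent`.

    Both arguments map each move to the probability of playing it.
    """
    ev = 0.0
    for my_move, p in strategy.items():
        for opp_move, q in opponent.items():
            if BEATS[my_move] == opp_move:
                ev += p * q      # we win this matchup
            elif BEATS[opp_move] == my_move:
                ev -= p * q      # we lose this matchup
            # ties contribute nothing
    return ev

uniform = {m: 1 / 3 for m in MOVES}
always_rock = {"rock": 1.0, "paper": 0.0, "scissors": 0.0}
always_paper = {"rock": 0.0, "paper": 1.0, "scissors": 0.0}

print(expected_value(uniform, always_rock))       # 0.0 -- unbeatable, but never ahead
print(expected_value(always_paper, always_rock))  # 1.0 -- exploits the "Bart Simpson" player
```

The uniform mix is an equilibrium strategy: it cannot be exploited, but it also gives up the chance to exploit predictable opponents, which is exactly the tension poker-playing AIs have to manage.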


AI beats professionals in six-player poker

#artificialintelligence

The AI, called Pluribus, defeated poker professional Darren Elias, who holds the record for most World Poker Tour titles, and Chris "Jesus" Ferguson, winner of six World Series of Poker events. Each pro separately played 5,000 hands of poker against five copies of Pluribus. In another experiment involving 13 pros, all of whom have won more than $1 million playing poker, Pluribus played five pros at a time for a total of 10,000 hands and again emerged victorious. "Pluribus achieved superhuman performance at multi-player poker, which is a recognized milestone in artificial intelligence and in game theory that has been open for decades," said Tuomas Sandholm, Angel Jordan Professor of Computer Science, who developed Pluribus with Noam Brown, who is finishing his Ph.D. in Carnegie Mellon's Computer Science Department as a research scientist at Facebook AI. "Thus far, superhuman AI milestones in strategic reasoning have been limited to two-party competition. The ability to beat five other players in such a complicated game opens up new opportunities to use AI to solve a wide variety of real-world problems."


r/MachineLearning - AMA: We are Noam Brown and Tuomas Sandholm, creators of the Carnegie Mellon / Facebook multiplayer poker bot Pluribus. We're also joined by a few of the pros Pluribus played against. Ask us anything!

#artificialintelligence

You are right that the algorithms in Pluribus are totally different from reinforcement learning or MCTS. At a high level, that is because our settings are 1) games, that is, there is more than one player, and 2) of imperfect information, that is, when a player has to choose an action, the player does not know the entire state of the world. There is no good textbook on solving imperfect-information games. So, to read up on this literature, you will need to read research papers. Below in this post are selected papers from my research group that would be good to read given that you want to learn about this field.
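For readers who want a concrete starting point before diving into the papers: equilibrium finding in imperfect-information games is commonly built on regret minimization rather than on value-based RL or MCTS. The sketch below is my own illustration, not code from Pluribus; it runs plain regret matching in rock-paper-scissors against a fixed, rock-heavy opponent. Applying the same update at every information set during self-play is the core idea behind counterfactual regret minimization (CFR).

```python
# Illustrative sketch: regret matching in rock-paper-scissors.
# Not Pluribus code; names and parameters are placeholders.

import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(my_move, opp_move):
    """+1 if my_move beats opp_move, -1 if it loses, 0 on a tie."""
    if my_move == opp_move:
        return 0
    return 1 if BEATS[my_move] == opp_move else -1

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive cumulative regret (uniform if none)."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / len(regrets)] * len(regrets)
    return [p / total for p in positive]

def train(opponent_strategy, iterations=20000, seed=0):
    random.seed(seed)
    regrets = [0.0] * len(MOVES)
    strategy_sum = [0.0] * len(MOVES)
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        my_move = random.choices(MOVES, weights=strategy)[0]
        opp_move = random.choices(MOVES, weights=opponent_strategy)[0]
        earned = payoff(my_move, opp_move)
        # Regret of an action = what it would have earned minus what we actually earned.
        for i, alternative in enumerate(MOVES):
            regrets[i] += payoff(alternative, opp_move) - earned
        for i, p in enumerate(strategy):
            strategy_sum[i] += p
    total = sum(strategy_sum)
    return {m: strategy_sum[i] / total for i, m in enumerate(MOVES)}

# Against a rock-heavy opponent the average strategy drifts toward paper.
print(train(opponent_strategy=[0.8, 0.1, 0.1]))
```

The quantity tracked per action is cumulative regret: how much better that action would have done than what was actually played. Playing in proportion to positive regret drives the average strategy toward a no-regret strategy, which is the guarantee CFR-style methods build on.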


Facebook and CMU's 'superhuman' poker AI beats human pros

#artificialintelligence

AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook's AI lab and Carnegie Mellon University, has bested some of the world's top players in a series of games of six-person no-limit Texas Hold'em poker. Over 12 days and 10,000 hands, the AI system named Pluribus faced off against 12 pros in two different settings. In one, the AI played alongside five human players; in the other, five versions of the AI played with one human player (the computer programs were unable to collaborate in this scenario). Pluribus won an average of $5 per hand with hourly winnings of around $1,000 -- a "decisive margin of victory," according to the researchers.


Facebook's new poker-playing AI could wreck the online poker industry--so it's not being released

#artificialintelligence

Poker requires a skill that has always seemed uniquely human: the ability to be devious. To win, players must analyze how their opponents are playing and then trick them into handing over their chips. Such cunning, of course, comes pretty naturally to people. Now an AI program has, for the first time, shown itself capable of outwitting a whole table of poker pros using similar skills.


Hold 'Em or Fold 'Em? This A.I. Bluffs With the Best

#artificialintelligence

As Mr. Elias realized, Pluribus knew when to bluff, when to call someone else's bluff and when to vary its behavior so that other players couldn't pinpoint its strategy. "It does all the things the best players in the world do," said Mr. Elias, 32, who has won a record four titles on the World Poker Tour. "And it does a few things humans have a hard time doing." Experts believe the techniques that drive this and similar systems could be used in Wall Street trading, auctions, political negotiations and cybersecurity, activities that, like poker, involve hidden information. "You don't always know the state of the real world," said Noam Brown, the Facebook researcher who oversaw the Pluribus project.


AI beats professionals at six-player Texas Hold 'Em poker

New Scientist

Artificial intelligence has finally cracked the biggest challenge in poker: beating top professionals in six-player no-limit Texas Hold'Em, the most popular variant of the game. Over 20,000 hands of online poker, the AI beat fifteen of the world's top poker players, each of whom has won more than $1 million USD playing the game professionally. The AI, called Pluribus, was tested in 10,000 games against five human players, as well as in 10,000 rounds where five copies of Pluribus played against one professional – and did better than the pros in both. Pluribus was developed by Noam Brown of Facebook AI Research and Tuomas Sandholm at Carnegie Mellon University in the US. It is an improvement on their previous poker-playing AI, called Libratus, which in 2017 outplayed professionals at Heads-Up Texas Hold'Em, a variant of the game that pits two players head to head.


Bet On The Bot: AI Beats The Professionals At 6-Player Texas Hold 'Em

NPR Technology

During one experiment, the poker bot Pluribus played against five professional players. In artificial intelligence, it's a milestone when a computer program can beat top players at a game like chess. But a game like poker, specifically six-player Texas Hold'em, has been too tough for a machine to master -- until now. Researchers say they have designed a bot called Pluribus capable of taking on poker professionals in the most popular form of poker and winning.


A new poker bot can beat a table stacked with pros. Is business next?

#artificialintelligence

It knows when to hold 'em and when to fold 'em. And, unlike in the old Kenny Rogers ballad, it didn't need a grizzled cowboy gambler to teach it a trick or two. A poker bot has beaten a table full of pros at six-player, no-limit Texas Hold'em, the version of the game used by most tournaments, over the course of 10,000 hands of play. To master poker at this level, the A.I. learned entirely by playing millions of hands against itself, with no guidance from human card sharks. Among the players the bot, which is called Pluribus, beat were four-time World Poker Tour champion Darren Elias as well as World Series of Poker Main Event champions Chris "Jesus" Ferguson and Greg Merson.


Value Functions for Depth-Limited Solving in Zero-Sum Imperfect-Information Games

arXiv.org Artificial Intelligence

Depth-limited look-ahead search is an essential tool for agents playing perfect-information games. In imperfect-information games, the lack of a clear notion of the value of a state makes designing theoretically sound depth-limited solving algorithms substantially more difficult. Furthermore, most results in this direction consider only the domain of poker. We consider two-player zero-sum extensive-form games in general. We provide domain-independent definitions of optimal value functions and prove that they can be used for depth-limited look-ahead game solving. We prove that the minimal set of game states necessary to define the value functions is related to common knowledge of the players. We show that the value function may be defined in several structurally different ways. None of them is unique, but the set of possible outputs is convex, which enables approximating the value function by machine learning models.
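For contrast with the imperfect-information setting the abstract addresses, here is the perfect-information baseline it mentions: depth-limited look-ahead that replaces the subtree below a cutoff with a value estimate. The sketch below is only an illustration of that baseline (the toy game, names, and heuristic are mine), not an implementation of the paper's value functions.

```python
# Illustrative sketch: depth-limited minimax with a value function at the cutoff.
# The toy game and heuristic are placeholders, not part of the paper.

def depth_limited_value(state, depth, to_move, game, value_fn):
    """Minimax value of `state` from player +1's perspective, cutting off at `depth`."""
    if game.is_terminal(state):
        return game.terminal_value(state, to_move)
    if depth == 0:
        return value_fn(state, to_move)  # the value function stands in for the unexplored subtree
    children = [
        depth_limited_value(game.next_state(state, a), depth - 1, -to_move, game, value_fn)
        for a in game.actions(state)
    ]
    return max(children) if to_move == +1 else min(children)

class SubtractionGame:
    """Players alternately remove 1-3 stones; whoever takes the last stone wins."""
    def actions(self, pile):
        return [a for a in (1, 2, 3) if a <= pile]
    def next_state(self, pile, action):
        return pile - action
    def is_terminal(self, pile):
        return pile == 0
    def terminal_value(self, pile, to_move):
        # The player to move faces an empty pile, so the previous player has already won.
        return -to_move

def heuristic_value(pile, to_move):
    # Placeholder evaluation: piles that are multiples of 4 are lost for the player to move.
    return -to_move if pile % 4 == 0 else to_move

game = SubtractionGame()
print(depth_limited_value(10, depth=3, to_move=+1, game=game, value_fn=heuristic_value))  # 1
```

In perfect-information games a state's value is well defined, so any reasonable evaluation can be plugged in at the cutoff; the paper's contribution is defining and characterizing the analogous objects for imperfect-information games, where no single state value exists.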