The current most popular variant of poker, played in casinos and seen on television, is no-limit Texas hold'em. This game and a smaller variant, limit Texas hold'em, have been used as a testbed for artificial intelligence research since 1997. Since 2006, the Annual Computer Poker Competition has allowed researchers, programmers, and poker players to play their poker programs against each other, allowing us to find out which artificial intelligence techniques work best in practice. The competition has resulted in significant advances in fields such as computational game theory, and resulted in algorithms that can find optimal strategies for games six orders of magnitude larger than was possible using earlier techniques.
You are right that the algorithms in Pluribus are totally different from reinforcement learning or MCTS. At a high level, that is because our settings are 1) games, that is, there is more than one player, and 2) of imperfect information, that is, when a player has to choose an action, the player does not know the entire state of the world. There is no good textbook on solving imperfect-information games, so to read up on this literature you will need to read research papers. Below in this post are selected papers from my research group that would be good to read, given that you want to learn about this field.
AI has definitively beaten humans at another of our favorite games. A poker bot, designed by researchers from Facebook's AI lab and Carnegie Mellon University, has bested some of the world's top players in a series of games of six-person no-limit Texas Hold'em poker. Over 12 days and 10,000 hands, the AI system named Pluribus faced off against 12 pros in two different settings. In one, the AI played alongside five human players; in the other, five versions of the AI played with one human player (the computer programs were unable to collaborate in this scenario). Pluribus won an average of $5 per hand with hourly winnings of around $1,000 -- a "decisive margin of victory," according to the researchers.
Poker requires a skill that has always seemed uniquely human: the ability to be devious. To win, players must analyze how their opponents are playing and then trick them into handing over their chips. Such cunning, of course, comes pretty naturally to people. Now an AI program has, for the first time, shown itself capable of outwitting a whole table of poker pros using similar skills.
As Mr. Elias realized, Pluribus knew when to bluff, when to call someone else's bluff and when to vary its behavior so that other players couldn't pinpoint its strategy. "It does all the things the best players in the world do," said Mr. Elias, 32, who has won a record four titles on the World Poker Tour. "And it does a few things humans have a hard time doing." Experts believe the techniques that drive this and similar systems could be used in Wall Street trading, auctions, political negotiations and cybersecurity, activities that, like poker, involve hidden information. "You don't always know the state of the real world," said Noam Brown, the Facebook researcher who oversaw the Pluribus project.
Artificial intelligence has finally cracked the biggest challenge in poker: beating top professionals in six-player no-limit Texas Hold'Em, the most popular variant of the game. Over 20,000 hands of online poker, the AI beat fifteen of the world's top poker players, each of whom has won more than $1 million USD playing the game professionally. The AI, called Pluribus, was tested in 10,000 games against five human players, as well as in 10,000 rounds where five copies of Pluribus played against one professional – and did better than the pros in both. Pluribus was developed by Noam Brown of Facebook AI Research and Tuomas Sandholm at Carnegie Mellon University in the US. It is an improvement on their previous poker-playing AI, called Libratus, which in 2017 outplayed professionals at Heads-Up Texas Hold'Em, a variant of the game that pits two players head to head.
During one experiment, the poker bot Pluribus played against five professional players. In artificial intelligence, it's a milestone when a computer program can beat top players at a game like chess. But a game like poker, specifically six-player Texas Hold'em, has been too tough for a machine to master -- until now. Researchers say they have designed a bot called Pluribus capable of taking on poker professionals in the most popular form of poker and winning.
It knows when to hold'em and when to fold'em. And, unlike in the old Kenny Rogers ballad, it didn't need a grizzled cowboy gambler to teach it a trick or two. A poker bot has beaten a table full of pros at six-player, no-limit Texas Hold'em, the version of the game used by most tournaments, over the course of 10,000 hands of play. To master poker at this level, the A.I. learned entirely by playing millions of hands against itself, with no guidance from human card sharks. Among the players the bot, which is called Pluribus, beat were four-time World Poker Tour champion Darren Elias as well as World Series of Poker Main Event champions Chris "Jesus" Ferguson and Greg Merson.
Depth-limited look-ahead search is an essential tool for agents playing perfect-information games. In imperfect-information games, the lack of a clear notion of the value of a state makes designing theoretically sound depth-limited solving algorithms substantially more difficult. Furthermore, most results in this direction consider only the domain of poker. We consider two-player zero-sum extensive-form games in general. We provide domain-independent definitions of optimal value functions and prove that they can be used for depth-limited look-ahead game solving. We prove that the minimal set of game states necessary to define the value functions is related to the common knowledge of the players. We show that the value function may be defined in several structurally different ways. None of them is unique, but the set of possible outputs is convex, which enables approximating the value function by machine learning models.
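To make the abstract's starting point concrete, here is a minimal, hypothetical sketch of depth-limited look-ahead search in a perfect-information zero-sum game (the simple "take 1-3 stones" subtraction game, where the player who cannot take a stone loses). Once the depth limit is reached, a heuristic value function substitutes for exact game values -- the role that the paper's optimal value functions generalize to imperfect-information settings. All names and the game itself are illustrative, not from the paper.

```python
def heuristic_value(pile, maximizing):
    # Heuristic standing in for the true value at the depth limit.
    # In this toy game the player to move wins iff pile % 4 != 0;
    # values are always from the maximizer's perspective.
    mover_wins = pile % 4 != 0
    return 1 if mover_wins == maximizing else -1

def depth_limited_search(pile, depth, maximizing):
    # Terminal state: the player to move cannot take a stone and loses.
    if pile == 0:
        return -1 if maximizing else 1
    # Depth limit reached: fall back on the heuristic value function.
    if depth == 0:
        return heuristic_value(pile, maximizing)
    # Otherwise expand the look-ahead tree one ply deeper (take 1-3 stones).
    values = [depth_limited_search(pile - k, depth - 1, not maximizing)
              for k in range(1, min(3, pile) + 1)]
    return max(values) if maximizing else min(values)
```

With a pile of 5 stones the maximizer to move wins (`depth_limited_search(5, 2, True)` returns 1), while a pile of 4 is lost for the mover. The difficulty the abstract addresses is that in an imperfect-information game no such per-state `heuristic_value` is well defined, since a state's worth depends on the players' beliefs over the states they cannot distinguish.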
The AI poker-playing bot, which initially became popular after it hit a hot streak against top poker players in tournaments in 2017, has reportedly been "hired" by a Pentagon agency -- the Defense Innovation Unit, according to the report. After the computational game theory-based Libratus won more than $1.8 million in a poker championship, defeating four poker professionals, Tuomas Sandholm -- the head of the project under which the AI mechanism was created -- reportedly founded a startup called Strategy Robot, which is primarily aimed at adapting the poker AI for military use in such areas as simulations and planning. In June 2018 the US Department of Defense created the Joint Artificial Intelligence Center (JAIC), designed to accelerate "the delivery of AI-enabled capabilities, scaling the Department-wide impact of AI and synchronizing DoD AI activities to expand Joint Force advantages." The center will oversee around 600 AI projects and has a five-year, $1.7 billion budget, according to reports.
In 2017, a poker bot called Libratus made headlines when it roundly defeated four top human players at no-limit Texas Hold'Em. Now, Libratus' technology is being adapted to take on opponents of a different kind--in service of the US military. Libratus--Latin for balanced--was created by researchers from Carnegie Mellon University to test ideas for automated decision-making based on game theory. Early last year, the professor who led the project, Tuomas Sandholm, founded a startup called Strategy Robot to adapt his lab's game-playing technology for government use, such as in war games and simulations used to explore military strategy and planning. Late in August, public records show, the company received a two-year contract of up to $10 million with the US Army.