Extensive games are often used to model the interactions of multiple agents within an environment. Much recent work has focused on increasing the size of an extensive game that can be feasibly solved. Despite these improvements, many interesting games are still too large for such techniques. A common approach for computing strategies in these large games is to first employ an abstraction technique to reduce the original game to an abstract game that is of a manageable size. This abstract game is then solved and the resulting strategy is used in the original game. Most top programs in recent AAAI Computer Poker Competitions use this approach. The trend in this competition has been that strategies found in larger abstract games tend to beat strategies found in smaller abstract games. These larger abstract games have more expressive strategy spaces and therefore contain better strategies. In this paper we present a new method for computing strategies in large games. This method allows us to compute more expressive strategies without increasing the size of abstract games that we are required to solve. We demonstrate the power of the approach experimentally in both small and large games, while also providing a theoretical justification for the resulting improvement.
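The abstraction pipeline described above — shrink the game, solve the smaller abstract game, then play the resulting strategy in the original game — can be sketched on a toy example. Everything below is invented for illustration: the tiny matrix game stands in for an extensive game, the bucketing of rows stands in for card abstraction, and regret matching stands in for whatever equilibrium-finding algorithm is actually used; none of it is the paper's method.

```python
from collections import Counter

def abstract_game(payoff, bucket_of):
    """Merge original row actions that share a bucket, averaging their payoffs."""
    n_buckets = max(bucket_of) + 1
    n_cols = len(payoff[0])
    sums = [[0.0] * n_cols for _ in range(n_buckets)]
    counts = [0] * n_buckets
    for i, row in enumerate(payoff):
        b = bucket_of[i]
        counts[b] += 1
        for j, v in enumerate(row):
            sums[b][j] += v
    return [[s / counts[b] for s in sums[b]] for b in range(n_buckets)]

def solve_row_strategy(payoff, iters=20000):
    """Regret matching for the row player of a zero-sum matrix game.

    The column player best-responds each round; the *average* row strategy
    approaches a maximin (equilibrium) strategy.
    """
    n, m = len(payoff), len(payoff[0])
    regret = [0.0] * n
    strat_sum = [0.0] * n
    for _ in range(iters):
        pos = [max(r, 0.0) for r in regret]
        z = sum(pos)
        strat = [p / z for p in pos] if z > 0 else [1.0 / n] * n
        col_vals = [sum(strat[i] * payoff[i][j] for i in range(n))
                    for j in range(m)]
        j = min(range(m), key=col_vals.__getitem__)  # opponent best response
        value = col_vals[j]
        for i in range(n):
            regret[i] += payoff[i][j] - value
            strat_sum[i] += strat[i]
    total = sum(strat_sum)
    return [s / total for s in strat_sum]

def lift(abstract_strat, bucket_of):
    """Translate the abstract strategy back to the original action set,
    splitting each bucket's probability evenly over its members."""
    size = Counter(bucket_of)
    return [abstract_strat[b] / size[b] for b in bucket_of]

# "Original" game: matching pennies with each row action duplicated, so the
# abstraction (buckets 0 and 1) merges pairs of interchangeable actions.
payoff = [[1, -1], [1, -1], [-1, 1], [-1, 1]]
bucket_of = [0, 0, 1, 1]

avg = solve_row_strategy(abstract_game(payoff, bucket_of))
lifted = lift(avg, bucket_of)
print(avg)     # ~[0.5, 0.5] in the abstract game
print(lifted)  # ~[0.25, 0.25, 0.25, 0.25] in the original game
```

In this toy case the merged rows are identical, so the abstraction loses nothing; the point of the paper is precisely that real abstractions are lossy, and that a fixed abstract-game size caps how expressive the lifted strategy can be.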
Among the many achievements of machine learning in recent years, some of the most striking are the victories of machines over human players in games, such as Google DeepMind's conquest of Go in 2016. In such milestones, researchers are often guided by theory guaranteeing that an optimal strategy exists and can be found, given a good algorithm and enough compute. But what do you do when theory breaks down? Two researchers at Carnegie Mellon University and Facebook went back to the drawing board to tackle six-player no-limit Texas hold'em, the most popular form of multiplayer poker in the world. An optimal strategy isn't computable for this form of the card game, so they designed some elegant search strategies for their computer program, "Pluribus," to beat the best human players over 10,000 hands of poker.
Carnegie Mellon's No-Limit Texas Hold'em software, Libratus, made short work of four of the world's best professional poker players in Pittsburgh at the grueling "Brains vs. Artificial Intelligence" poker tournament. Poker now joins chess, Jeopardy, Go, and many other games at which programs outplay people. But poker differs from all the others in one big way: players have to guess based on partial, or "imperfect," information. "Chess and Go are games of perfect information," explains Libratus co-creator Noam Brown, a Ph.D. candidate at Carnegie Mellon. "All the information in the game is available for both sides to see."
The world's best artificial-intelligence poker player seems to know exactly when to hold 'em and when to fold 'em. An artificial-intelligence program known as Libratus has beaten some of the world's very best human poker players in a 20-day No-Limit Texas Hold'em tournament, defeating its four opponents by about $1.77 million in poker chips, according to Pittsburgh's Rivers Casino, where the "Brains vs. Artificial Intelligence" poker tournament was held. At the end of each day, at least one of the human players was beating the AI program, but in the end it was not enough. "We appreciate their hard work, but unfortunately, the computer won," said Craig Clark, general manager of Rivers Casino.
Researchers behind a poker-playing AI system called DeepStack say it is the first algorithm ever to have beaten poker pros in heads-up no-limit Texas hold'em. The claim, if verified, would mark a major milestone in the development of artificial-intelligence systems. Beating expert poker players differs from past AI successes against human competitors in games such as Jeopardy and Go because each player's hand provides only an incomplete picture of the state of play, requiring a program to navigate tactics such as bluffing based on asymmetric information. DeepStack is a collaboration between researchers at the University of Alberta and two Czech universities, who say in a new, not-yet-peer-reviewed paper that it is the "first computer program to beat professional poker players in heads-up no-limit Texas hold'em".