The current most popular variant of poker, played in casinos and seen on television, is no-limit Texas hold'em. This game and a smaller variant, limit Texas hold'em, have been used as testbeds for artificial intelligence research since 1997. Since 2006, the Annual Computer Poker Competition has allowed researchers, programmers, and poker players to play their poker programs against each other, allowing us to find out which artificial intelligence techniques work best in practice. The competition has driven significant advances in fields such as computational game theory, producing algorithms that can find optimal strategies for games six orders of magnitude larger than was possible with earlier techniques.
In 2017, a poker bot called Libratus made headlines when it roundly defeated four top human players at no-limit Texas hold'em. Now, Libratus's technology is being adapted to take on opponents of a different kind--in service of the US military. Libratus--Latin for balanced--was created by researchers from Carnegie Mellon University to test ideas for automated decision-making based on game theory. Early last year, the professor who led the project, Tuomas Sandholm, founded a startup called Strategy Robot to adapt his lab's game-playing technology for government use, such as in wargames and simulations used to explore military strategy and planning. Late in August, public records show, the company received a two-year contract worth up to $10 million with the US Army.
As the great Kenny Rogers once said, a good gambler has to know when to hold'em and know when to fold'em. At the Rivers Casino in Pittsburgh this week, a computer program called Libratus may finally prove that computers can do this better than any human card player. Libratus is playing thousands of games of heads-up, or two-player, no-limit Texas hold'em against several expert professional poker players. Now a little more than halfway through the 20-day contest, Libratus is up by almost $800,000 against its human opponents. So victory, while far from guaranteed, may well be in the cards.
How do you beat a poker pro? Dr. Tuomas Sandholm has built an artificial intelligence poker bot to do just that. In a game that has more combinations than the number of atoms in the universe, this AI needed a supercomputer to work. Together, Pittsburgh Supercomputing Center and HPE provide the computing power needed to make this AI possible.
Although games of skill like Go and chess have long been touchstones for intelligence, programmers have gotten steadily better at crafting programs that can now beat even the best human opponents. Only recently, however, has artificial intelligence (AI) begun to successfully challenge humans in the much more popular (and lucrative) game of poker. Part of what makes poker difficult is that the luck of the draw in this card game introduces an intrinsic randomness (although randomness is also an element of games like backgammon, at which software has beaten humans for decades). More important, though, is that in the games where computers have previously triumphed, players have "perfect information" about the state of the play up until that point. "Randomness is not nearly as hard a problem," said Michael Bowling of the University of Alberta in Canada.
"In regular poker, to force betting, each person puts in an ante," Palansky said. "We've changed some tournaments where one person essentially pays everyone's ante at once. So, when you are in a particular spot at the table, you pay everyone's ante and the rest of the time you don't pay any ante at all. If the ante is a chip value of 100, that person may put in 900 for all nine players."
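The arithmetic behind the format Palansky describes is simple: one seat posts the entire table's antes for the hand instead of each player anteing individually. A minimal sketch (the function name is my own, purely illustrative):

```python
def table_ante(per_player_ante, num_players):
    # In this single-payer ante format, one seat posts every
    # player's ante for the hand; total = ante x seats.
    return per_player_ante * num_players

# Matches the quoted example: a 100-chip ante at a nine-handed table.
total = table_ante(100, 9)  # 900 chips posted by one player
```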
Games have long been used as test beds and benchmarks for artificial intelligence, and there has been no shortage of achievements in recent months. Google DeepMind's AlphaGo and poker bot Libratus from Carnegie Mellon University have both beaten human experts at games that have traditionally been hard for AI – some 20 years after IBM's DeepBlue achieved the same feat in chess. Games like these have the attraction of clearly defined rules; they are relatively simple and cheap for AI researchers to work with, and they provide a variety of cognitive challenges at any desired level of difficulty. By inventing algorithms that play them well, researchers hope to gain insights into the mechanisms needed to function autonomously. With the arrival of the latest techniques in AI and machine learning, attention is now shifting to visually detailed computer games – including the 3D shooter Doom, various 2D Atari games such as Pong and Space Invaders, and the real-time strategy game StarCraft.
Michael Bowling has always loved games. When he was growing up in Ohio, his parents were avid card players, dealing out hands of everything from euchre to gin rummy. Meanwhile, he and his friends would tear up board games lying around the family home and combine the pieces to make their own games, with new challenges and new markers for victory. Bowling has come far from his days of playing with colourful cards and plastic dice. He has three degrees in computing science and is now a professor at the University of Alberta.
"This has been a huge collaborative effort from all involved and it is important to thank the elected leadership and regulatory authorities in Delaware, Nevada and New Jersey for their dedication and diligence to help move online poker forward," said Bill Rini, WSOP.com's head of online poker. "Everyone has had the end user in mind throughout this process, and as a result, we believe the United States, for the first time in a regulated environment, will have a large-scale multi-state offering that will propel the industry forward as soon as next month."
As a classic example of an imperfect-information game, Heads-Up No-Limit Texas Hold'em (HUNL) has been studied extensively in recent years. While state-of-the-art approaches based on Nash equilibrium have been successful, they lack the ability to model and exploit opponents effectively. This paper presents an evolutionary approach to discover opponent models based on Long Short-Term Memory (LSTM) neural networks and Pattern Recognition Trees. Experimental results showed that poker agents built with this method can adapt to opponents they have never seen in training and exploit weak strategies far more effectively than Slumbot 2017, one of the cutting-edge Nash-equilibrium-based poker agents. In addition, agents evolved through playing against relatively weak rule-based opponents tied statistically with Slumbot in heads-up matches. Thus, the proposed approach is a promising new direction for building high-performance adaptive agents in HUNL and other imperfect-information games.
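The core idea of the abstract above — evolving agents that exploit a fixed, rule-based opponent rather than approximating a Nash equilibrium — can be illustrated with a toy evolutionary loop. This is a minimal sketch, not the paper's method: the agent is reduced to a single bluffing-frequency parameter, and the payoff model (an opponent whose call rate rises with our bluff rate, winning 1 unit on a successful bluff and losing 2 when called) is entirely hypothetical.

```python
import random

def exploit_value(bluff_rate):
    # Hypothetical payoff model: the rule-based opponent calls more
    # often the more we bluff (call_rate tracks bluff_rate).
    # A successful bluff wins 1 unit; a called bluff loses 2.
    call_rate = bluff_rate
    return bluff_rate * (1 - call_rate) - 2 * bluff_rate * call_rate

def evolve(pop_size=50, generations=200, sigma=0.05, seed=0):
    # Simple (mu, lambda)-style loop: truncation selection keeps the
    # top fifth of the population, offspring are Gaussian mutations.
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=exploit_value, reverse=True)
        parents = pop[: pop_size // 5]
        pop = [min(1.0, max(0.0, rng.choice(parents) + rng.gauss(0, sigma)))
               for _ in range(pop_size)]
    return max(pop, key=exploit_value)

best = evolve()
```

Under this toy payoff (b - 3b^2), the best bluff rate is 1/6, and the evolved parameter settles nearby; the paper's actual agents replace the single parameter with LSTM-based opponent models, but the select-mutate-evaluate loop is the same in spirit.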