Scientists at the University of Alberta are cracking away at the complexities of artificial intelligence with their new "DeepStack" system, which can not only play a round of poker with you, but walk away with all of your money. This new technology builds upon the legacy of systems like IBM's Deep Blue, which in 1997 became the first program to beat a world champion, Garry Kasparov, at chess. As Michael Bowling, co-author of the research and leader of the Computer Poker Research Group at Alberta, puts it: poker is the next big step for designing AI. In a game of Heads Up No Limit poker, DeepStack was able to win against professional poker players at a rate of 49 big blinds per 100 hands. "We are winning by 49 per 100, that's like saying whatever the players were doing was not that much more effective than if they just folded every hand," Bowling tells Inverse.
It is no mystery why poker is such a popular pastime: the dynamic card game produces drama in spades as players are locked in a complicated tango of acting and reacting that becomes increasingly tense with each escalating bet. The same elements that make poker so entertaining have also created a complex problem for artificial intelligence (AI). A study published today in Science describes an AI system called DeepStack that recently defeated professional human players in heads-up, no-limit Texas hold'em poker, an achievement that represents a leap forward in the types of problems AI systems can solve. DeepStack, developed by researchers at the University of Alberta, relies on the use of artificial neural networks that researchers trained ahead of time to develop poker intuition. During play, DeepStack uses its poker smarts to break down a complicated game into smaller, more manageable pieces that it can then work through on the fly.
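The core idea described above, searching only a few moves ahead and substituting a pre-trained "intuition" for everything beyond the horizon, can be illustrated with a toy sketch. The game here is a simple Nim-like subtraction game, not poker, and the flat value estimate is a stand-in for DeepStack's trained neural networks; the function names and structure are illustrative assumptions, not DeepStack's actual architecture.

```python
# Toy sketch of depth-limited search with a learned value fallback.
# Game: a pile of stones, players alternately take 1 or 2, and the
# player who takes the last stone wins.

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

def estimate_value(pile):
    # Stand-in for a trained value network's estimate of a position.
    # Here it just returns "unknown" (0.0); DeepStack's real networks
    # were trained on millions of solved poker situations.
    return 0.0

def solve(pile, depth):
    """Value of the position for the player to move (+1 win, -1 loss),
    searching at most `depth` plies before falling back on the estimate
    (negamax form: each level negates the opponent's value)."""
    if pile == 0:
        return -1.0  # opponent took the last stone: we lost
    if depth == 0:
        return estimate_value(pile)  # horizon reached: trust the "intuition"
    return max(-solve(pile - m, depth - 1) for m in legal_moves(pile))

print(solve(4, 10))  # deep search: a pile of 4 is a win for the mover -> 1.0
print(solve(4, 1))   # shallow search hits the horizon and returns 0.0
```

With a deep enough search the true value is recovered; with a shallow one, the quality of play rests entirely on how good the value estimate is, which is why the researchers trained that component ahead of time.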
That's why I was shocked by a piece of news that came out of London on January 27 this year. AlphaGo, a program created by Google subsidiary DeepMind, defeated the European Go champion, five games to nothing. Maybe you think that's no big deal. After all, it's almost 20 years since IBM's Deep Blue beat Kasparov at chess in 1997. But chess is about logic; Go involves imagination and intuition.
Next week, scientists working on artificial intelligence (AI) and games will be watching the latest human-machine matchup. But instead of a single pensive player squaring off against a computer, a team of five top video game players will be furiously casting magic spells and lobbing (virtual) fireballs at a team of five AIs called OpenAI Five. They'll be playing the real-time strategy game Dota 2 at The International in Vancouver, Canada, an annual e-sports tournament that draws professional gamers who compete for millions of dollars. In 1997, IBM's Deep Blue AI bested chess champion Garry Kasparov. In 2016, DeepMind's AlphaGo AI beat Lee Sedol, a world master, at the traditional Chinese board game Go.
An invincible checkers-playing program named Chinook has solved a game whose origins date back several millennia, scientists reported Thursday on the journal Science's Web site. By playing out every possible move -- about 500 billion billion in all -- the computer proved it can never be beaten. Even if its opponent also played flawlessly, the outcome would be a draw. Chinook, created by computer scientists from the University of Alberta in 1989, wrapped up its work less than three months ago. In doing so, its programmers say the newly crowned checkers king has solved the most challenging game yet cracked by a machine -- even outdoing the chess-playing wizardry of IBM's Deep Blue.
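What it means to "solve" a game, as Chinook did, can be seen on a much smaller example. Checkers required examining on the order of 500 billion billion positions over many years; tic-tac-toe has only a few thousand distinct positions and can be exhausted in an instant. The sketch below is an illustrative analogue, not Chinook's method: it computes the game-theoretic value of every position by exhaustive search, proving that perfect play from the empty board is a draw, just as it is in checkers.

```python
from functools import lru_cache

# All eight winning lines on a 3x3 board, indexed 0..8 row by row.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game-theoretic value for `player` to move:
    +1 = forced win, 0 = draw, -1 = forced loss (exhaustive negamax)."""
    w = winner(board)
    if w:
        return 1 if w == player else -1
    if '.' not in board:
        return 0  # board full, no winner: draw
    other = 'O' if player == 'X' else 'X'
    best = -1
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i + 1:]
            best = max(best, -value(child, other))
    return best

# Perfect play from the empty board is a draw, as with checkers:
print(value('.' * 9, 'X'))  # -> 0
```

The memoization (`lru_cache`) is what makes exhaustion feasible: each distinct position is evaluated once, however many move orders reach it, the same principle, at vastly larger scale, behind proving checkers a draw.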