An artificial intelligence has beaten eight world champions at bridge, a game in which human supremacy had resisted the march of the machines until now. The victory represents a new milestone for AI because in bridge players work with incomplete information and must react to the behaviour of several other players – a scenario far closer to human decision-making. In contrast, in chess and Go – in both of which AIs have already beaten human champions – a player faces a single opponent at a time and both players are in possession of all the information. "What we've seen represents a fundamentally important advance in the state of artificial intelligence systems," said Stephen Muggleton, a professor of machine learning at Imperial College London. French startup NukkAI announced the news of its AI's victory on Friday, at the end of a two-day tournament in Paris.
From the very dawn of the field, search with value functions has been a fundamental concept of computer games research. Turing's chess algorithm from 1950 was able to think two moves ahead, and Shannon's work on chess from 1950 includes an extensive section on evaluation functions to be used within a search. Samuel's checkers program from 1959 already combined search with value functions that were learned through self-play and bootstrapping. TD-Gammon improved upon those ideas and used neural networks to learn those complex value functions -- which are, again, used within search. The combination of decision-time search and value functions has been present in the remarkable milestones where computers bested their human counterparts in long-standing challenging games -- Deep Blue for chess and AlphaGo for Go. Until recently, this powerful framework of search aided with (learned) value functions had been limited to perfect information games. As many interesting problems do not give the agent perfect information about the environment, this was an unfortunate limitation. This thesis introduces the reader to sound search for imperfect information games.
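The pattern the paragraph traces -- search to a fixed depth, with a heuristic value function standing in for positions the search cannot reach -- can be sketched in a few lines. The sketch below is illustrative only: it uses a toy subtraction game (players alternately remove 1–3 tokens from a pile; whoever takes the last token wins) rather than chess, and the game, function names, and hand-written heuristic are all assumptions, not anything from the works cited.

```python
# Depth-limited minimax with a heuristic value function, in the spirit of
# Shannon's 1950 proposal. The game is a toy subtraction game chosen only
# for illustration: remove 1-3 tokens per turn; taking the last token wins.

def legal_moves(pile):
    """Moves available in the current position."""
    return [m for m in (1, 2, 3) if m <= pile]

def evaluate(pile, maximizing):
    """Heuristic value function applied at the search horizon.

    In this game a pile that is a multiple of 4 is theoretically lost
    for the player to move; the heuristic simply encodes that.
    """
    losing_for_mover = (pile % 4 == 0)
    if maximizing:
        return -1.0 if losing_for_mover else 1.0
    return 1.0 if losing_for_mover else -1.0

def minimax(pile, depth, maximizing):
    """Search `depth` plies ahead, falling back to evaluate() at the cutoff."""
    if pile == 0:
        # The previous player took the last token and won.
        return -1.0 if maximizing else 1.0
    if depth == 0:
        return evaluate(pile, maximizing)
    children = (minimax(pile - m, depth - 1, not maximizing)
                for m in legal_moves(pile))
    return max(children) if maximizing else min(children)
```

For example, `minimax(5, 4, True)` returns `1.0` (the mover can take one token, leaving a losing pile of 4), while at `depth=0` the search degenerates to a bare call of the value function. The same skeleton underlies the programs named above; what changed over the decades is chiefly how the value function is obtained -- hand-crafted in early chess programs, learned by self-play in Samuel's checkers program and TD-Gammon.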
U of Alberta created the first Computing Science department in Canada in 1964. It has a long tradition of research in AI (it is rated 3rd in the world in machine learning) and has also led in the development of AI for strategy games, among them checkers, chess, Go, and poker. The results can be commercialized in non-game applications as well. The evening's talks were by Jonathan Schaeffer (computer chess) and Ryan Hayward (the strategy game Hex).
When Michael Bowling was growing up in Ohio, his parents were avid card players, dealing out hands of everything from euchre to gin rummy. Meanwhile, he and his friends would tear up board games lying around the family home and combine the pieces to make their own games, with new challenges and new markers for victory. Bowling has come far from his days of playing with colourful cards and plastic dice. He has three degrees in computing science and is now a professor at the University of Alberta. But, in his heart, Bowling still loves playing games.
While in 1996 Garry Kasparov won 4 to 2 against IBM's supercomputer Deep Blue, in 1997 Deep Blue won against Garry Kasparov. This marked a milestone in computers' capacity to keep learning and grow more capable. Now, 20 years later, we all should realize that Artificial Intelligence is taking over our lives. In March 2017, for the very first time, Artificial Intelligence won Heads-Up No-Limit Texas Hold'em against 33 poker players from 17 different nations. At the University of Alberta in Canada, Matej Moravcik and his team created an Artificial Intelligence machine they call "DeepStack."
It is no mystery why poker is such a popular pastime: the dynamic card game produces drama in spades as players are locked in a complicated tango of acting and reacting that becomes increasingly tense with each escalating bet. The same elements that make poker so entertaining have also created a complex problem for artificial intelligence (AI). A study published today in Science describes an AI system called DeepStack that recently defeated professional human players in heads-up, no-limit Texas hold'em poker, an achievement that represents a leap forward in the types of problems AI systems can solve. DeepStack, developed by researchers at the University of Alberta, relies on the use of artificial neural networks that researchers trained ahead of time to develop poker intuition. During play, DeepStack uses its poker smarts to break down a complicated game into smaller, more manageable pieces that it can then work through on the fly.
Scientists at the University of Alberta are cracking away at the complexities of artificial intelligence with their new "DeepStack" system, which can not only play a round of poker with you, but walk away with all of your money. This new technology builds upon the legacy of systems like IBM's Deep Blue, which in 1996 became the first program to beat a world champion, Garry Kasparov, in a game of chess. As Michael Bowling, co-author of the research and leader of the Computer Poker Research Group at Alberta, puts it: poker is the next big step for designing AI. In a game of Heads Up No Limit poker, DeepStack was able to win against professional poker players at a rate of 49 big blinds per 100. "We are winning by 49 per 100, that's like saying whatever the players were doing was not that much more effective than if they just folded every hand," Bowling tells Inverse.
An invincible checkers-playing program named Chinook has solved a game whose origins date back several millennia, scientists reported Thursday on the journal Science's Web site. By playing out every possible move -- about 500 billion billion in all -- the computer proved it can never be beaten. Even if its opponent also played flawlessly, the outcome would be a draw. Chinook, created by computer scientists from the University of Alberta in 1989, wrapped up its work less than three months ago. In doing so, its programmers say the newly crowned checkers king has solved the most challenging game yet cracked by a machine -- even outdoing the chess-playing wizardry of IBM's Deep Blue.