AI Game-Playing Techniques

AI Magazine

In conjunction with the Association for the Advancement of Artificial Intelligence's Hall of Champions exhibit, the Innovative Applications of Artificial Intelligence conference held a panel discussion entitled "AI Game-Playing Techniques: Are They Useful for Anything Other Than Games?" This article summarizes the panelists' comments about whether ideas and techniques from AI game playing are useful elsewhere and what kinds of games might be suitable as "challenge problems" for future research.


A.I. is Now 10 Times Better Than a Pro Poker Player

#artificialintelligence

Scientists at the University of Alberta are cracking away at the complexities of artificial intelligence with their new "DeepStack" system, which can not only play a round of poker with you but walk away with all of your money. This new technology builds on the legacy of systems like IBM's Deep Blue, which in 1996 became the first program to beat a reigning world champion, Garry Kasparov, in a game of chess (it won a full match against him the following year). As Michael Bowling, co-author of the research and leader of the Computer Poker Research Group at Alberta, puts it, poker is the next big step for designing AI. In heads-up no-limit poker, DeepStack beat professional poker players at a rate of 49 big blinds per 100 hands. "We are winning by 49 per 100, that's like saying whatever the players were doing was not that much more effective than if they just folded every hand," Bowling tells Inverse.
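Bowling's comparison can be made concrete with a little blind arithmetic. The sketch below is illustrative and not from the article; it assumes the standard heads-up structure in which the two players alternate posting a small blind of 0.5 big blinds and a big blind of 1 big blind, so a player who folds every single hand forfeits 75 big blinds per 100 hands:

```python
# Illustrative sketch (assumption: standard heads-up blinds of 0.5 bb and 1 bb,
# alternating between the two players every hand).

HANDS = 100

# Fold every hand: lose the small blind on half the hands, the big blind on the rest.
fold_loss = (HANDS / 2) * 0.5 + (HANDS / 2) * 1.0
print(fold_loss)  # 75.0 big blinds lost per 100 hands

# DeepStack's reported win rate against the pros.
deepstack_rate = 49  # big blinds won per 100 hands

# The pros' loss rate was about two-thirds of the "always fold" loss rate,
# which is the sense in which their play was "not that much more effective".
print(deepstack_rate / fold_loss)  # ~0.65
```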


Mind of the Machine: AlphaGo and Artificial Intelligence

#artificialintelligence

Recently, another chapter of man vs. machine played out. Google's DeepMind team tried out their state-of-the-art algorithm on the game of Go. Its opponent was the Korean professional Lee Sedol, a world champion several times over and arguably the best player of the game at the time. To put it simply, this was the equivalent of Deep Blue vs. Garry Kasparov, and as with the IBM chess-playing machine before it, AlphaGo took home the prize, four wins to one loss. Go had long been thought to be the one game at which computers could not beat a human, because its move trees are far too large to search by brute force.
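The brute-force problem comes down to branching factor. As a rough sketch (the figures below are commonly cited approximations, not from the article), chess offers about 35 legal moves per position while Go offers around 250, and the size of the game tree grows exponentially with that number:

```python
# Rough illustration of why brute-force search fails for Go.
# Assumed average branching factors: ~35 for chess, ~250 for Go.

def tree_size(branching_factor: int, depth: int) -> int:
    """Number of leaf positions when every move sequence of `depth` plies is explored."""
    return branching_factor ** depth

chess_leaves = tree_size(35, 10)   # looking ten plies ahead in chess
go_leaves = tree_size(250, 10)     # looking ten plies ahead in Go

# At the same depth, Go's tree is (250/35)^10 times larger -- over 300 million times.
print(f"chess, 10 plies: {chess_leaves:.2e}")
print(f"Go, 10 plies:    {go_leaves:.2e}")
print(f"ratio:           {go_leaves / chess_leaves:.2e}")
```

This is why AlphaGo relied on learned evaluation rather than the exhaustive-search approach that worked for chess.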


AlphaGo Zero Goes From Rank Beginner to Grandmaster in Three Days--Without Any Help

IEEE Spectrum Robotics

In the 1970 sci-fi thriller Colossus: The Forbin Project, a computer designed to control the United States' nuclear weapons is switched on, and immediately discovers the existence of a Soviet counterpart.


Google DeepMind's program beats human at Go

AITopics Original Links

Google's software engineers have taught a computer program to beat almost any human at an ancient and highly complex Chinese strategy game known as "Go." While computers have largely mastered checkers and chess, Go, considered the oldest board game still played, is far more complicated. There are more possible positions in the game than there are atoms in the universe, Google said -- an "irresistible" challenge for the company's DeepMind engineers, who used artificial intelligence to enable the program to learn from repeated games. The Google unit's AlphaGo program is much more sophisticated than the IBM-created Deep Blue computer that in 1996 won the first chess game against a reigning world champion, Garry Kasparov.