AlphaGo Wins Final Game In Match Against Champion Go Player

IEEE Spectrum Robotics

AlphaGo, a largely self-taught Go-playing AI, last night won the fifth and final game in a match held in Seoul, South Korea, against that country's Lee Sedol, one of the greatest modern players of the ancient Chinese game. The final score was 4 games to 1. Thus falls the last and computationally hardest game that programmers have taken as a test of machine intelligence. Chess, AI's original touchstone, fell to the machines 19 years ago, but Go had been expected to last for many years to come. The sweeping victory means far more than the US $1 million prize, which Google's London-based acquisition, DeepMind, says it will give to charity.


AlphaGo, Deep Learning, and the Future of the Human Microscopist

#artificialintelligence

In March of last year, Google's (Mountain View, California) artificial intelligence (AI) computer program AlphaGo beat the best Go player in the world, 18-time champion Lee Se-dol, in a tournament, winning 4 of 5 games.[1] At first glance this news would seem of little interest to a pathologist, or to anyone else for that matter. After all, many will remember that IBM's (Armonk, New York) computer program Deep Blue beat Garry Kasparov, at the time the greatest chess player in the world, and that was 19 years ago. The rules of the several-thousand-year-old game of Go are extremely simple. The board consists of 19 horizontal and 19 vertical black lines.
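To make that board description concrete, here is a minimal sketch in Python (my own illustration, not from the article): the 19 horizontal and 19 vertical lines meet at 361 intersections, and stones are placed on intersections rather than squares. Capture and ko rules are deliberately left out.

```python
# Minimal 19x19 Go board sketch (illustration only; capture and ko rules omitted).
EMPTY, BLACK, WHITE = 0, 1, 2
SIZE = 19  # 19 lines in each direction give 19 * 19 = 361 intersections

# The board as a grid of intersection states, all initially empty.
board = [[EMPTY for _ in range(SIZE)] for _ in range(SIZE)]

def place_stone(board, row, col, color):
    """Place a stone on an empty intersection."""
    if board[row][col] != EMPTY:
        raise ValueError("intersection already occupied")
    board[row][col] = color

place_stone(board, 3, 3, BLACK)  # a typical opening move near a corner star point
print(sum(cell == EMPTY for row in board for cell in row))  # 360 empty intersections remain
```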


A.I. is Now 10 Times Better Than a Pro Poker Player

#artificialintelligence

Scientists at the University of Alberta are chipping away at the complexities of artificial intelligence with their new "DeepStack" system, which can not only play a round of poker with you, but walk away with all of your money. This new technology builds upon the legacy of systems like IBM's Deep Blue, which was the first program to beat a world champion, Garry Kasparov, at chess in 1996. As Michael Bowling, co-author of the research and leader of the Computer Poker Research Group at Alberta, puts it: poker is the next big step for designing AI. In a game of Heads-Up No-Limit poker, DeepStack was able to win against professional poker players at a rate of 49 big blinds per 100. "We are winning by 49 per 100, that's like saying whatever the players were doing was not that much more effective than if they just folded every hand," Bowling tells Inverse.
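Bowling's folding comparison can be checked with a little arithmetic. In heads-up play the two players alternate posting the small blind (0.5 big blinds) and the big blind (1 big blind), so someone who folds every single hand loses about 0.75 big blinds per hand, roughly 75 big blinds per 100 hands. The sketch below is a back-of-the-envelope calculation of mine, not from the article; it simply puts DeepStack's reported 49 bb/100 next to that always-fold baseline.

```python
# Back-of-the-envelope check of the "folding every hand" comparison.
# Assumption (not from the article): standard heads-up blinds of 0.5 bb and 1 bb,
# posted alternately by the two players.

small_blind = 0.5   # big blinds forfeited when folding from the small blind
big_blind = 1.0     # big blinds forfeited when folding from the big blind

# Folding every hand forfeits whichever blind was posted, alternating between positions.
always_fold_loss_per_hand = (small_blind + big_blind) / 2      # 0.75 bb per hand
always_fold_loss_per_100 = 100 * always_fold_loss_per_hand     # 75 bb per 100 hands

deepstack_win_rate = 49  # bb per 100 hands, as reported in the article

print(f"Always folding loses about {always_fold_loss_per_100:.0f} bb per 100 hands")
print(f"DeepStack's edge over the pros was {deepstack_win_rate} bb per 100 hands")
# 49 bb/100 is on the same order as the 75 bb/100 lost by always folding,
# which is the sense of Bowling's remark.
```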


Mind of the Machine: AlphaGo and Artificial Intelligence

#artificialintelligence

Recently, another chapter of man vs. machine played out. Google's DeepMind project team tried out their state-of-the-art algorithm on the game of Go. The Korean pro Lee Sedol, a world champion several times over and arguably the best player of the game right now, was its opponent. To put it simply, this was the equivalent of Deep Blue v. Garry Kasparov, and as with the IBM chess-playing machine before it, AlphaGo took home the prize, four wins to one loss. Go had been thought to be the one game at which computers could not beat a human, because a computer cannot brute-force the move tree.
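The brute-force point is usually made with rough game-tree numbers: chess has on the order of 35 legal moves per position and games of roughly 80 plies, while Go has on the order of 250 legal moves per position and games of roughly 150 plies, so the naive search tree for Go is astronomically larger. Here is a short sketch using those commonly cited approximations (they are not figures from the article):

```python
import math

# Commonly cited rough figures (approximations, not from the article):
# chess: ~35 legal moves per position, games of ~80 plies
# Go:    ~250 legal moves per position, games of ~150 plies
games = {"chess": (35, 80), "Go": (250, 150)}

for name, (branching, depth) in games.items():
    # The naive game tree has about branching ** depth leaves; report its order of magnitude.
    log10_tree = depth * math.log10(branching)
    print(f"{name}: roughly 10^{log10_tree:.0f} leaf positions in a full-depth search tree")

# Output is about 10^123 for chess and 10^360 for Go. Exhaustive search is hopeless
# for either game, but Go's tree is vastly larger, which is why brute force alone never worked.
```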


A 'Brief' History of Game AI Up To AlphaGo, Part 1

#artificialintelligence

This is the first part of 'A Brief History of Game AI Up to AlphaGo'. Part 2 is here and Part 3 is here. In this part, we shall cover the birth of AI and the very first game-playing AI programs to run on digital computers. On March 9, 2016, a historic milestone for AI was reached when the Google-engineered program AlphaGo defeated the world-class Go champion Lee Sedol. Go is a two-player strategy board game like chess, but the larger number of possible moves and the difficulty of evaluating positions make Go the harder problem for AI.