Facebook continues its efforts to create artificial intelligence capable of outclassing all humans at the ancient Chinese strategy board game Go. The social media company recently published a research paper showcasing the progress it has made with its DarkForest bots, which combine deep convolutional neural networks with search techniques to play Go at a high level. Yuandong Tian and Yan Zhu, AI researchers at Facebook, describe the program's performance in the paper's abstract. "Against human players, [darkfores2 achieves] a stable 3d level on KGS Go Server as a ranked bot," the duo points out [pdf]. This is a marked improvement over the 4k-5k ranks that Clark & Storkey (2015) predicted for DCNNs after studying matches against other machine players.
AlphaGo, a largely self-taught Go-playing AI, last night won the fifth and final game in a match held in Seoul, South Korea, against that country's Lee Sedol. Sedol is one of the greatest modern players of the ancient Chinese game. The final score was 4 games to 1. Thus falls the last and computationally hardest game that programmers have taken as a test of machine intelligence. Chess, AI's original touchstone, fell to the machines 19 years ago, but Go had been expected to last for many years to come. The sweeping victory means far more than the US$1 million prize, which Google's London-based acquisition, DeepMind, says it will give to charity.
South Korean Go players will be banned from using smartphones during official tournaments in the future, and it's all thanks to Google's AlphaGo AI. The Korea Times reports that the Korea Baduk Association -- baduk being the local name for Go -- is currently drafting new rules that will outlaw smartphone use in matches. While the organization is fully aware you can't carry AlphaGo around in your pocket at the moment, it's preempting a time when certain AI tools that can give players a competitive edge do become available on smartphones. It may seem strange that smartphone use is permitted in official Go competitions as it stands, but then there's basically no precedent for digital tools being of any help to experienced players. Though IBM's Deep Blue chess computer beat world champ Garry Kasparov in 1997, the number of variables and strategic complexity of Go have kept programmers from creating bots that exhibit anything more than an amateur skill level.
A few months ago I made the trek to the sylvan campus of the IBM research labs in Yorktown Heights, New York, to catch an early glimpse of the fast-arriving, long-overdue future of artificial intelligence. This was the home of Watson, the electronic genius that conquered Jeopardy! in 2011. The original Watson is still here--it's about the size of a bedroom, with 10 upright, refrigerator-shaped machines forming the four walls. The tiny interior cavity gives technicians access to the jumble of wires and cables on the machines' backs. It is surprisingly warm inside, as if the cluster were alive. Today's Watson is very different. It no longer exists solely within a wall of cabinets but is spread across a cloud of open-standard servers that run several hundred "instances" of the AI at once. Like all things cloudy, Watson is served to simultaneous customers anywhere in the world, who can access it using their phones, their desktops, or their own data servers.
This is the first part of 'A Brief History of Game AI Up to AlphaGo'. Part 2 is here and part 3 is here. In this part, we shall cover the birth of AI and the very first game-playing AI programs to run on digital computers. On March 9th of 2016, a historic milestone for AI was reached when the Google-engineered program AlphaGo defeated the world-class Go champion Lee Sedol. Go is a two-player strategy board game like Chess, but the larger number of possible moves and difficulty of evaluation make Go the harder problem for AI.
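The gap between the two games can be made concrete with some rough arithmetic on game-tree size. A common back-of-the-envelope estimate treats the tree as branching**plies; the branching factors (~35 for chess, ~250 for 19x19 Go) and typical game lengths (~80 and ~150 plies) used below are commonly cited approximations, not exact figures:

```python
import math

# Commonly cited approximations of average branching factor and game length.
GAMES = {
    "chess": {"branching": 35, "plies": 80},
    "go": {"branching": 250, "plies": 150},
}

def log10_tree_size(branching: int, plies: int) -> float:
    """Approximate game-tree size as branching**plies, returned as log10
    so the huge numbers stay readable."""
    return plies * math.log10(branching)

for name, g in GAMES.items():
    exponent = log10_tree_size(g["branching"], g["plies"])
    print(f"{name}: roughly 10^{exponent:.0f} nodes in the game tree")
```

Run as-is, this prints a chess tree of roughly 10^124 nodes versus a Go tree of roughly 10^360, which is why brute-force search that sufficed for chess-era engines could not be stretched to cover Go.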