There have been far too many fear-mongering news articles about the latest version of DeepMind's AlphaGo. Let's set the record straight: AlphaGo is an incredible technology, and it's not terrifying at all. I'll go over the technical details of how AlphaGo really works: a mixture of deep learning and reinforcement learning. That's what keeps me going.
Just a few months ago, Facebook thought that its AI experts were on the cusp of a breakthrough, making a computer that could play Go better than any previous machine. Then Google came along and blew them out of the water, revealing first that it had built a Go computer capable of defeating a professional human player, and then going on to beat Lee Sedol, the greatest player of the last decade, 4-1 over the course of a week. Facebook has already tried to spoil Google's thunder once, with Mark Zuckerberg releasing a coincidentally timed statement on the company's Go progress just one day before Google announced its victory over the European champion Fan Hui (and one day after Google had already revealed to the press that the victory had occurred). Zuckerberg himself has been more conciliatory this time round, posting a message of congratulations after AlphaGo's third victory in a row: "Congrats to the Google DeepMind team on this historic milestone in AI research – a third straight victory over Go grandmaster Lee Sedol. We live in exciting times."
DeepMind's AlphaGo Zero algorithm surpassed the best Go players in the world by training entirely through self-play. It played against itself repeatedly, improving over time with no human gameplay data. AlphaGo Zero marked a remarkable moment in AI history, one that will always be remembered. Move 37 in particular is worthy of many philosophical debates. You'll see what I mean and get a technical overview of its neural components (with code animations) in this video.
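To make the self-play idea concrete, here is a minimal sketch of the core loop: one shared policy plays both sides of a game, and after each game the winner's moves are reinforced and the loser's punished. This is a deliberately toy illustration, not DeepMind's actual method (AlphaGo Zero uses a deep network with Monte Carlo tree search, not a lookup table), and the game here is a hypothetical stand-in: a "race to 10" where players alternately add 1 or 2 and whoever reaches exactly 10 wins.

```python
import random
from collections import defaultdict

TARGET = 10          # first player to reach exactly 10 wins
ACTIONS = (1, 2)     # each turn, add 1 or 2 to the running total

def legal_moves(total):
    return [a for a in ACTIONS if total + a <= TARGET]

def choose(q, total, epsilon, rng):
    """Epsilon-greedy move selection from the shared value table."""
    moves = legal_moves(total)
    if rng.random() < epsilon:
        return rng.choice(moves)
    return max(moves, key=lambda a: q[(total, a)])

def self_play(episodes=5000, epsilon=0.2, rng=None):
    """Train a shared (state, move) value table purely by self-play:
    the same policy plays both sides, and after each finished game the
    winner's moves get reward +1 and the loser's get -1, folded into a
    running Monte Carlo average. No human gameplay data is used."""
    rng = rng or random.Random(0)
    q = defaultdict(float)   # (total, move) -> estimated value
    n = defaultdict(int)     # visit counts for the running average
    for _ in range(episodes):
        history = []         # (player, total, move) for every ply
        total, player = 0, 0
        while total != TARGET:
            move = choose(q, total, epsilon, rng)
            history.append((player, total, move))
            total += move
            player = 1 - player
        winner = 1 - player  # the player who just reached TARGET
        for p, s, a in history:
            reward = 1.0 if p == winner else -1.0
            n[(s, a)] += 1
            q[(s, a)] += (reward - q[(s, a)]) / n[(s, a)]
    return q

q = self_play()
# From total 8, adding 2 reaches 10 and wins immediately, while adding 1
# hands the opponent an immediate win, so the table learns the contrast:
print(q[(8, 2)], q[(8, 1)])  # → 1.0 -1.0
```

The key property this shares with AlphaGo Zero is that the training signal comes only from game outcomes generated by the current policy playing itself; the real system replaces the table with a neural network and the epsilon-greedy choice with tree search.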
It's only March and already we've seen a computer beat a Go grandmaster and a self-driving car crash into a bus. The world is waking up to the ways in which a combination of "deep learning" artificial intelligence and robotics will take over most jobs. But if we don't want our robot servants to rise up and kill us in our beds, maybe we should delete the video of us beating their grandparents with hockey sticks. Thanks to science fiction, we know that the first thing AI will do is take over the defence grid and nuke us all. In Harlan Ellison's 1967 story I Have No Mouth, and I Must Scream – one of the most brutal depictions of an AI-dominated world – an AI called AM, constructed to fight a nuclear war, kills off most of the human race, keeping five people as playthings.
AlphaGo, a largely self-taught Go-playing AI, last night won the fifth and final game in a match held in Seoul, South Korea, against South Korea's Lee Sedol. Sedol is one of the greatest modern players of the ancient Chinese game. The final score was 4 games to 1. Thus falls the last and computationally hardest game that programmers have taken as a test of machine intelligence. Chess, AI's original touchstone, fell to the machines 19 years ago, but Go had been expected to hold out for many years to come. The sweeping victory means far more than the US $1 million prize, which Google's London-based acquisition, DeepMind, says it will give to charity.