Just a few months ago, the social network thought that its AI experts were on the cusp of a breakthrough, making a computer that could play Go faster than any previous machine. Then Google came along and blew them out of the water, revealing first that it had built a Go computer capable of defeating a professional human player, and then going on to beat Lee Sedol, the greatest player of the last decade, 4-1 over the course of a week. Facebook has already tried to steal Google's thunder once, with Mark Zuckerberg releasing a coincidentally timed statement on the company's Go progress just one day before Google announced its victory over the European champion Fan Hui (and one day after Google had already revealed to the press that the victory had occurred). Zuckerberg himself has been more conciliatory this time round, posting a message of congratulations after AlphaGo's third victory in a row: "Congrats to the Google DeepMind team on this historic milestone in AI research – a third straight victory over Go grandmaster Lee Sedol. We live in exciting times."
There have been way too many fear-mongering news articles about the latest version of DeepMind's AlphaGo. Let's set the record straight: AlphaGo is an incredible technology, and it's not terrifying at all. I'll go over the technical details of how AlphaGo really works: a mixture of deep learning and reinforcement learning. That's what keeps me going.
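To make the reinforcement-learning half of that mixture concrete, here is a minimal sketch of policy-gradient learning (REINFORCE) on an invented one-move toy game. This is an illustration of the general idea only, not DeepMind's actual training code; the game, the learning rate, and the episode count are all assumptions, and AlphaGo's real policy network is a deep convolutional model rather than a bare vector of move scores.

```python
import numpy as np

# Hypothetical toy "game": one move from 3 choices; only move 2 wins.
# A minimal REINFORCE sketch, not AlphaGo's actual training procedure.
rng = np.random.default_rng(0)
n_moves = 3
logits = np.zeros(n_moves)  # "policy network" reduced to raw move scores

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr = 0.5
for step in range(200):
    probs = softmax(logits)
    move = rng.choice(n_moves, p=probs)
    reward = 1.0 if move == 2 else 0.0  # win only when move 2 is played
    # REINFORCE update: push probability toward moves that earned reward
    grad = -probs
    grad[move] += 1.0
    logits += lr * reward * grad

print(softmax(logits).round(3))  # probability mass concentrates on move 2
```

The same gradient signal, scaled up to deep networks and the game of Go, is what let AlphaGo's policy network improve from self-played games rather than hand-written rules.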
Something strange happened in the world of artificial intelligence (AI) on Wednesday. Facebook CEO Mark Zuckerberg posted on his Facebook profile that his company has created an AI system that is "getting close" to beating the best humans at the Chinese board game Go. Hours later, DeepMind -- a startup based in London that was bought by Google for a reported $400 million in 2014 -- said it had already developed an AI named AlphaGo that had just beaten the best Go player in Europe. DeepMind's breakthrough was splashed across the front cover of the science journal Nature yesterday evening and covered by over 200 media titles. "This is the first time that a computer Go program has defeated a human professional player, without handicap, in the full game of Go - a feat that was previously believed to be at least a decade away," explained the DeepMind research paper -- Mastering the game of Go with deep neural networks and tree search.
DeepMind's AlphaGo Zero algorithm surpassed every previous version of AlphaGo, including the one that beat the best Go players in the world, by training entirely through self-play. It played against itself repeatedly, getting better over time with no human gameplay data. AlphaGo Zero was a remarkable moment in AI history, a moment that will always be remembered. Move 37, from the earlier match against Lee Sedol, is in particular worthy of many philosophical debates. You'll see what I mean and get a technical overview of its neural components (code animations) in this video.
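The self-play idea can be sketched with a toy example: tabular Q-learning on the game of Nim (5 stones, take 1 or 2, whoever takes the last stone wins), where one agent plays both sides and learns only from who wins. This is a deliberately simplified assumption-laden stand-in; AlphaGo Zero pairs deep networks with Monte Carlo tree search rather than a lookup table, but the loop of "play yourself, learn from the result, repeat" is the same shape.

```python
import random
from collections import defaultdict

# Toy self-play sketch: tabular Q-learning on Nim, not AlphaGo Zero's actual
# algorithm. One policy moves for BOTH players and learns from outcomes alone.
random.seed(0)
START, MOVES = 5, (1, 2)        # 5 stones; take 1 or 2; taking the last stone wins
Q = defaultdict(float)          # Q[(stones, move)], valued from the mover's perspective
alpha, eps = 0.5, 0.3

def legal(stones):
    return [m for m in MOVES if m <= stones]

def pick(stones):
    if random.random() < eps:                       # explore
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda m: Q[(stones, m)])  # exploit

for episode in range(5000):
    s = START
    while s > 0:
        m = pick(s)             # the same improving policy plays both sides
        nxt = s - m
        if nxt == 0:
            target = 1.0        # mover took the last stone and wins
        else:
            # the opponent moves next, so our value is the negation of their best value
            target = -max(Q[(nxt, mm)] for mm in legal(nxt))
        Q[(s, m)] += alpha * (target - Q[(s, m)])
        s = nxt

best_first_move = max(legal(START), key=lambda m: Q[(START, m)])
print(best_first_move)  # taking 2 leaves 3 stones, a losing position for the opponent
```

Note the sign flip in the target: because the same table scores positions from the perspective of whoever is about to move, a position that is good for your opponent is exactly that bad for you. That zero-sum bookkeeping is what lets a single self-playing agent stand in for two players.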