Go


DeepMind's newest AI learns by itself and creates its own knowledge

#artificialintelligence

A couple of months ago, Google's artificial intelligence (AI) group, DeepMind, unveiled the latest incarnation of its Go-playing program, AlphaGo Zero, an AI so powerful that in just three days it absorbed thousands of years of human knowledge of the game before inventing better moves of its own. It has been hailed as a major breakthrough in AI learning because, unlike previous versions of AlphaGo, which went on to beat the world Go champion and take the online Go community to the cleaners, AlphaGo Zero mastered the ancient Chinese board game from a clean slate, with no more help from humans than being told the rules of the game. As if that weren't already impressive enough, it also took its predecessor, AlphaGo, the AI that famously beat the South Korean grandmaster Lee Sedol, to the cleaners, hammering it 100 games to nil. AlphaGo Zero's ability to learn for itself, without human input, is a milestone on the road to one day realising Artificial General Intelligence (AGI), something the same company, DeepMind, published an architecture for last year, and it will undoubtedly help us create the next generation of more "general" AIs that can do far more than thrash humans at board games. AlphaGo Zero amassed its impressive skills using a technique called reinforcement learning, and at the heart of the program is a group of software "neurons" connected together to form a digital neural network.
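As a rough illustration of the "software neurons" mentioned above, the toy sketch below shows how a single neuron weights its inputs and how layers of such neurons connect into a small network. All names and numbers here are hypothetical and purely for illustration; this is not DeepMind's architecture.

```python
# Toy illustration (not DeepMind's code): each "software neuron" takes a
# weighted sum of its inputs and squashes it with a non-linearity; stacking
# layers of neurons yields a digital neural network.

import math
import random


def neuron(inputs, weights, bias):
    """One software neuron: weighted sum of inputs passed through tanh."""
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)


def layer(inputs, n_neurons):
    """A layer of neurons, each with its own (here random, untrained) weights."""
    return [
        neuron(inputs,
               [random.uniform(-1, 1) for _ in inputs],
               random.uniform(-1, 1))
        for _ in range(n_neurons)
    ]


# A tiny "network": a board encoded as numbers flows through two layers.
board_features = [random.choice([0.0, 1.0, -1.0]) for _ in range(9)]
hidden = layer(board_features, 4)
output = layer(hidden, 1)
print("toy network output:", output[0])
```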


Google's AlphaGo AI wins three-match series against the world's best Go player – TechCrunch

#artificialintelligence

Google's AlphaGo AI has once again made the case that machines are now smarter than man -- when it comes to games of strategy, at least. AlphaGo made its name last year when it defeated high-profile Go player Lee Sedol 4-1, but now it has beaten the world's best player of Go, the hugely complex ancient strategy game. Today, it won against Go world champion Ke Jie to clinch a second, decisive win of a three-part series that is taking place in China this week. "I'm putting my hand on my chest, because I thought I had a chance. I thought I was very close to winning the match in the middle of the game, but that might not have been what AlphaGo was thinking.


AlphaGo Zero: The Most Significant Research Advance in AI

@machinelearnbot

Recently, Google DeepMind's program AlphaGo Zero achieved a superhuman level of play without any human help, entirely through self-play! Here is the Nature paper explaining the technical details (also PDF version: Mastering the Game of Go without Human Knowledge). One of the main reasons for its success was the use of a novel form of reinforcement learning in which AlphaGo learned by playing against itself. The system starts with a neural network that knows nothing about Go. It plays millions of games against itself, tuning the neural network to predict the next move and the eventual winner of each game. The updated neural network is combined with the Monte Carlo Tree Search algorithm to create a new and stronger version of AlphaGo Zero, and the process repeats.
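The sketch below mirrors that loop in miniature: a single network predicts move probabilities and the eventual winner, self-play games (guided by a tree search) generate training data, and the retrained network feeds back into the next round. Every name (PolicyValueNet, mcts_search, self_play_game) and every constant is hypothetical; this is a minimal sketch of the idea, not DeepMind's implementation.

```python
# Illustrative AlphaGo Zero-style training loop: self-play -> train -> repeat.
# Toy code only; move legality, real MCTS, and gradient descent are omitted.

import random

BOARD_CELLS = 9 * 9  # toy board; the real system plays on 19 x 19


class PolicyValueNet:
    """Stand-in for the network: position -> (move probabilities, predicted winner)."""

    def predict(self, position):
        # An untrained net "knows nothing about Go": uniform policy, neutral value.
        return [1.0 / BOARD_CELLS] * BOARD_CELLS, 0.0

    def train(self, examples):
        # Real system: gradient descent on (position, search policy, outcome) triples.
        pass


def mcts_search(net, position, simulations=50):
    """Very rough stand-in for Monte Carlo Tree Search guided by the network."""
    policy, _ = net.predict(position)
    # Real MCTS would expand a tree using the policy as a prior and the value
    # as a leaf evaluation; here we simply return the prior.
    return policy


def self_play_game(net):
    """One game of the network against itself, recording (position, search policy)
    pairs and labelling them afterwards with the eventual winner."""
    history, position, to_play = [], tuple([0] * BOARD_CELLS), 1
    for _ in range(30):  # truncated toy game; occupied cells are not checked
        pi = mcts_search(net, position)
        move = random.choices(range(BOARD_CELLS), weights=pi)[0]
        history.append((position, pi, to_play))
        board = list(position)
        board[move] = to_play
        position, to_play = tuple(board), -to_play
    winner = random.choice([1, -1])  # placeholder outcome
    return [(pos, pi, winner * player) for pos, pi, player in history]


# The loop described in the excerpt: self-play, retrain, stronger net, repeat.
net = PolicyValueNet()
for iteration in range(3):
    examples = []
    for _ in range(10):  # the real system plays millions of games
        examples.extend(self_play_game(net))
    net.train(examples)
    print(f"iteration {iteration}: trained on {len(examples)} positions")
```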



AlphaGo, Reinforcement Learning, and the Future of Artificial Intelligence

#artificialintelligence

Last year, Google DeepMind took a giant step forward in proving the value of deep learning when the latest version of its Go-playing computer program, AlphaGo Zero, beat the previous model after only three days of self-training.


Google AI Achieves "Alien" Superhuman Mastery of Chess and Go in Mere Hours - The New Stack

#artificialintelligence

News of a specialized computer program beating human champions at games like chess and Go might not surprise people as much as it once did, back when Deep Blue beat world chess champion Garry Kasparov in 1997, or more recently when Google DeepMind's AI AlphaGo beat Lee Sedol in a stunning upset in 2016.


AlphaGo Official Trailer

#artificialintelligence

AlphaGo chronicles a journey from the halls of Cambridge, through the backstreets of Bordeaux, past the coding terminals of DeepMind, to Seoul, where a legendary Go master faces an unproven AI challenger. As the drama unfolds, questions emerge: What can artificial intelligence reveal about a 3000-year-old game? What will it teach us about humanity?


AlphaGo Zero: Approaching Perfection – Synced – Medium

#artificialintelligence

DeepMind recently published a paper in Nature introducing the latest evolution of its AI-powered Go program. "AlphaGo Zero" learns through self-play games, with no human knowledge required. The program crushed previous "AlphaGo" versions (including the one that beat the world's best human player, Ke Jie) with a record of 100 wins and zero losses, stimulating discussion in the Go and AI communities.


