Naysayers eat their words as Google's AI masters ancient game of Go

AITopics Original Links 

They said it couldn't be done, but Google's AI technology has proved them wrong by mastering the ancient Chinese game of Go roughly a decade earlier than anyone expected. Tapping deep neural networks and advanced tree-search techniques, researchers from Google DeepMind created a system called AlphaGo that takes a different approach to the game than had been tried before. In Go, two players alternately place black and white stones on the intersections of a 19-by-19 grid, each aiming to surround more territory than the opponent while capturing opposing stones and avoiding capture. With more possible board positions than there are atoms in the universe, Go has long been considered an ultimate challenge for artificial intelligence researchers. Traditional programs attacked the game with exhaustive game-tree search, enumerating candidate moves and responses, an approach that collapses under Go's enormous branching factor. AlphaGo instead combines Monte Carlo tree search, which samples promising lines of play rather than enumerating them all, with its neural networks to guide the search.
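The Monte Carlo tree search the article mentions can be sketched in miniature. The following is an illustrative Python sketch of generic MCTS (selection via UCB1, expansion, random rollout, backpropagation) applied to a toy Nim-like game where players remove 1-3 stones and whoever takes the last stone wins; it is not DeepMind's implementation, which pairs the search with trained policy and value networks, and all names here are hypothetical.

```python
import math
import random

# Toy MCTS sketch: Nim variant (take 1-3 stones; taking the last stone wins).
# Illustrates the generic algorithm only, not AlphaGo's neural-net-guided search.

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones      # stones remaining in this state
        self.player = player      # player to move (0 or 1)
        self.parent = parent
        self.move = move          # move that led to this state
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins from the perspective of the parent's mover

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if m <= self.stones and m not in tried]

def ucb1(child, parent_visits, c=1.4):
    # Upper Confidence Bound: trade off exploitation (win rate)
    # against exploration (rarely visited children).
    return child.wins / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def rollout(stones, player):
    # Simulation phase: play random legal moves to the end, return the winner.
    while True:
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        if stones == 0:
            return player         # this player took the last stone and wins
        player = 1 - player

def mcts(stones, player, iterations=2000):
    root = Node(stones, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 until a node with untried moves (or a leaf).
        while not node.untried_moves() and node.children:
            node = max(node.children, key=lambda c: ucb1(c, node.visits))
        # 2. Expansion: add one child for a randomly chosen untried move.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            node = Node(node.stones - m, 1 - node.player, parent=node, move=m)
            node.parent.children.append(node)
        # 3. Simulation: random playout from the new state (terminal states skip it).
        if node.stones == 0:
            winner = node.parent.player   # previous mover took the last stone
        else:
            winner = rollout(node.stones, node.player)
        # 4. Backpropagation: update visit and win counts back up to the root.
        while node:
            node.visits += 1
            if node.parent and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move at the root.
    return max(root.children, key=lambda c: c.visits).move

best = mcts(5, 0)   # from 5 stones, taking 1 leaves the opponent a lost position
print(best)
```

The sampling is the point: instead of expanding every branch, the search repeatedly pours simulations into moves whose estimated win rate holds up, which is what makes the approach viable for a branching factor as large as Go's.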
