First was Monte Carlo tree search, an algorithm that, rather than attempting to examine all possible future moves, samples a sparse selection of them, aggregating the outcomes of simulated games to estimate each move's quality. The second was the (re)discovery of deep networks, a contemporary incarnation of the neural networks that had been experimented with since the 1960s, but which were now cheaper, more powerful, and supplied with huge amounts of data on which to train. The combination of these techniques brought a drastic improvement in Go-playing programs, and ultimately Google DeepMind's AlphaGo program beat Go world champion Lee Sedol in March 2016. Now that Go has fallen, where do we go from here? Following Kasparov's defeat in 1997, scientists argued that the next challenge for AI was not to conquer another cerebral game.
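For the technically curious, the sampling idea behind Monte Carlo tree search can be sketched in a few dozen lines. This is a minimal UCT (Upper Confidence bounds applied to Trees) sketch on a toy Nim-like game, not anything resembling a Go engine; the game, class, and function names below are illustrative assumptions, not from any particular library:

```python
import math
import random

# Toy game: 'stones' remain on the table; players alternate taking 1-3 stones,
# and whoever takes the last stone wins. A hypothetical, minimal domain for MCTS.

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones              # stones remaining after 'move' was played
        self.player = player              # player to move next (0 or 1)
        self.parent = parent
        self.move = move                  # the move that led here
        self.children = []
        self.visits = 0
        self.wins = 0.0                   # wins from the view of the player who just moved
        self.untried = [m for m in (1, 2, 3) if m <= stones]

def uct_select(node, c=1.4):
    # UCB1: balance exploiting high win rates against exploring rarely tried moves.
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones, player):
    # Simulation: play random moves to the end; return the winner.
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return player                 # current player took the last stone
        player = 1 - player

def mcts(stones, player, iterations=4000):
    root = Node(stones, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: add one untried move as a new child.
        if node.untried:
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.stones - m, 1 - node.player, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation from the new node (terminal nodes need no rollout).
        if node.stones == 0:
            winner = 1 - node.player      # the player who just moved won
        else:
            winner = rollout(node.stones, node.player)
        # 4. Backpropagation: credit each node when its mover was the winner.
        while node is not None:
            node.visits += 1
            if node.parent is not None and node.parent.player == winner:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move, the usual robust choice.
    return max(root.children, key=lambda ch: ch.visits).move
```

Even with purely random rollouts this finds optimal play in the toy game (from 5 stones, taking 1 leaves the opponent a losing position); AlphaGo's key refinement was to replace random rollouts and uniform move selection with deep-network value and policy estimates.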
Jul-21-2016, 10:07:50 GMT