Go looks deceptively simple. The ancient Chinese board game is played on a grid of 19x19 lines; two players alternately place black and white stones on vacant intersections of those lines. Now this nearly 3,000-year-old board game is a frontier of artificial intelligence development. At the time of writing, Google DeepMind's AlphaGo program has played four games of a five-game series against Go world champion Lee Se-dol of South Korea.
AlphaGo's recent feat marks striking progress in artificial intelligence (AI) technology. AlphaGo, an AI-based computer program developed by a British subsidiary of Google Inc. of the United States, has beaten the world's top Go player, South Korea's Lee Se-dol, 4-1. AI programs had previously defeated skilled human players at chess and shogi, but it was widely said that it would take another 10 years for an AI system to beat top human players at Go. The cited hurdles were the sheer size of the Go board and the immense number of possible moves available over the course of a match.
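The scale gap described above can be made concrete with a quick back-of-the-envelope calculation. The branching factors and game length below are commonly cited rough estimates, not figures taken from the articles in this digest:

```python
# Back-of-the-envelope comparison of chess and Go search spaces.
# The branching factors and game length are commonly cited rough
# estimates (assumptions), not figures from the article above.

GO_INTERSECTIONS = 19 * 19   # 361 points on a 19x19 board
CHESS_BRANCHING = 35         # rough average legal moves per chess position
GO_BRANCHING = 250           # rough average legal moves per Go position
GAME_LENGTH = 80             # a plausible number of moves in one game

# Naive game-tree size: branching factor raised to the game length.
chess_tree = CHESS_BRANCHING ** GAME_LENGTH
go_tree = GO_BRANCHING ** GAME_LENGTH

print(f"Intersections on a Go board: {GO_INTERSECTIONS}")
print(f"Chess game tree: ~10^{len(str(chess_tree)) - 1}")
print(f"Go game tree:    ~10^{len(str(go_tree)) - 1}")
```

On these estimates the Go tree is dozens of orders of magnitude larger than the chess tree, which is why the brute-force search that sufficed for chess does not scale to Go.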
When a person's intelligence is tested, there are exams. When artificial intelligence is tested, there are games. But what happens when computer programs beat humans at all of those games? That is the question AI experts must ask after a Google-developed program called AlphaGo defeated a world-champion Go player in four out of five matches in a series that concluded Tuesday. Board games, long a yardstick for advances in AI, have reached the end of their era as a testing ground, said Murray Campbell, an IBM research scientist who was part of the team that developed Deep Blue, the first computer program to beat a world chess champion.
To the editor: I was dismayed by the article on Google DeepMind's computer. It was further evidence of how the media's naivete regarding the term "artificial intelligence," or AI, has corrupted the term's meaning. DeepMind, IBM's legendary Jeopardy super-champion Watson, and the numerous other systems cited as AI all have the intelligence of a rock. The intelligence of these systems lies in the human intelligence of the programmers who created them, not in the systems themselves. The generally accepted test for true AI is the Lovelace test, created by Selmer Bringsjord, Paul Bello, and David Ferrucci, who, not incidentally, was the head of IBM's Watson development team.
It was hailed as the most significant test of machine intelligence since Deep Blue defeated Garry Kasparov at chess nearly 20 years ago. Google's AlphaGo has won two of the first three games against grandmaster Lee Sedol in their Go match, showing the dramatic extent to which AI has improved over the years. That fateful day when machines finally become smarter than humans has never appeared...