Despite losing at chess to the IBM Deep Blue computer more than 20 years ago, Garry Kasparov is a big believer in artificial intelligence. The former world chess champion is now an author and speaker who is trying to counter some of the more alarmist fears about the rise of AI technologies, typically exemplified in Hollywood movies in which robots rise against their human creators. Speaking at the Train AI conference on Thursday in San Francisco, Kasparov explained how humanity has long considered chess-playing ability a metric of intelligence. "People looked at it as an opportunity to go deep in the human mind," he said of chess. That's why, when Kasparov lost to Deep Blue in their 1997 rematch -- after winning their first match in 1996, which, he likes to note, "nobody remembers" -- people considered it a "watershed moment" for computer science.
Last October, a computer system beat a professional human player at the ancient Chinese board game Go. The AI system, AlphaGo, was built by Google and trained using machine learning techniques. Google built the hardware that powered AlphaGo in-house, as it does with most of its infrastructure components. At the core of that hardware is the Tensor Processing Unit, or TPU, a chip Google designed specifically to run its AI workloads, the company's CEO, Sundar Pichai, said onstage this morning during the opening keynote of the Google I/O conference, held near Google headquarters in Mountain View, California. This is the first time Google has shared any information about the hardware backend that powers its AI, which will play a central role in the company's revamped cloud services strategy, announced earlier this year.
AlphaGo, a largely self-taught Go-playing AI, last night won the fifth and final game in a match held in Seoul, South Korea, against that country's Lee Sedol. Sedol is one of the greatest modern players of the ancient Chinese game. The final score was 4 games to 1. Thus falls the last and computationally hardest game that programmers have taken as a test of machine intelligence. Chess, AI's original touchstone, fell to the machines 19 years ago, but Go had been expected to hold out for many years to come. The sweeping victory means far more than the US$1 million prize, which Google's London-based acquisition, DeepMind, says it will give to charity.
In the arms race between Silicon Valley giants to develop faster and more complex artificial intelligence capabilities, Google has a secret weapon: It's developing its own chips. At a conference for developers on Wednesday, chief executive Sundar Pichai said the tech giant had designed a custom chip, which the company says it's been using for over a year, specifically to speed up its deep neural networks. These networks are the brains that "learn" over time to power features such as Gmail's "Smart Reply," the ability to tag people in photos, and voice search. The chips were also in place when Google's AlphaGo computer program beat Go champion Lee Sedol in March, although the company didn't announce it at the time. As companies have increasingly focused on building tools that use machine learning as a backbone, they've also branched out into creating their own chips instead of purchasing them from major vendors, such as Nvidia.