While serving a nine-month stint at Google, Sergey Levine watched as the company's AlphaGo program defeated the world's best human player of the ancient Chinese game Go in March. Levine, a robotics specialist at the University of California, Berkeley, admired the sophisticated feat of machine learning but couldn't help focusing on a notable shortcoming of the powerful Go-playing algorithms. "They never picked up any of the pieces themselves," he jokes. One way that the creators of AlphaGo trained the program was by feeding 160,000 previous games of Go to a powerful algorithm called a neural network, much the way similar algorithms have been shown countless labeled pictures of cats and dogs until they learn to recognize the animals in unlabeled photos. But that approach depends on enormous libraries of worked examples, and no comparable trove exists for physical tasks such as grasping and manipulating objects. So roboticists have instead turned to a different technique: the scientist gives a robot a goal, such as screwing a cap onto a bottle, but relies on the machine to figure out the specifics itself.
Last October, a computer system beat a professional human player at the ancient Chinese board game Go. The AI system, AlphaGo, was built by Google and trained using machine learning techniques. Google built the hardware that powered AlphaGo in-house, as it does most of its infrastructure. At the core of that hardware is the Tensor Processing Unit, or TPU, a chip Google designed specifically for machine learning, the company's CEO, Sundar Pichai, said onstage this morning during the opening keynote of the Google I/O developer conference, held near Google headquarters in Mountain View, California. This is the first time Google has shared any information about the hardware backend that powers its AI, which will play a central role in the company's revamped cloud services strategy, announced earlier this year.
In the arms race between Silicon Valley giants to develop faster and more complex artificial intelligence capabilities, Google has a secret weapon: It's developing its own chips. At a conference for developers on Wednesday, chief executive Sundar Pichai said the tech giant had designed the chip, which the company says it's been using for over a year, specifically to improve its deep neural networks. These networks are the brains that "learn" over time to power features such as Gmail's "Smart Reply," the ability to tag people in photos, and search by voice. The chips were also in place when Google's AlphaGo computer program beat Go champion Lee Sedol in March, although the company didn't announce it at the time. As companies have increasingly focused on building tools that use machine learning as a backbone, they've also branched out into creating their own chips instead of purchasing them from major vendors, such as Nvidia.
AlphaGo, a largely self-taught Go-playing AI, last night won the fifth and final game in a match held in Seoul, South Korea, against that country's Lee Sedol. Sedol is one of the greatest modern players of the ancient Chinese game. The final score was 4 games to 1. Thus falls the last and computationally hardest game that programmers have taken as a test of machine intelligence. Chess, AI's original touchstone, fell to the machines 19 years ago, but Go had been expected to last for many years to come. The sweeping victory means far more than the US $1 million prize, which Google's London-based acquisition, DeepMind, says it will give to charity.