DeepMind, a research lab that was acquired by Google for £400 million, has become a well-known entity in the field of artificial intelligence (AI) for building agents that can learn and master games such as the arcade classic "Space Invaders" and the ancient Chinese board game of "Go". Over the last year, the five-year-old company, which employs approximately 250 people in London, has been branching out and applying its self-learning algorithms to fields such as healthcare and energy. On the latter, it's helped Google to slash the electricity bill in its data centres worldwide, and it's now exploring how it can help the National Grid to predict demand. But Demis Hassabis, DeepMind's cofounder and CEO, announced on Sunday that the company isn't about to turn its back on the gaming field any time soon. In fact, Hassabis wrote on Twitter that DeepMind has been busy improving the AlphaGo agent that beat Lee Sedol, one of the world's best Go players, earlier this year.
The 2016 victory by a Google-built AI at the notoriously complex game of Go was a bold demonstration of the power of modern machine learning. That triumphant AlphaGo system, created by AI research group Google DeepMind, confounded expectations that computers were years away from beating a human champion. But as significant as that achievement was, DeepMind's co-founder Demis Hassabis expects it will be dwarfed by how AI will transform society in the years to come. "I would actually be very pessimistic about the world if something like AI wasn't coming down the road," he said. "The reason I say that is that if you look at the challenges that confront society: climate change, sustainability, mass inequality -- which is getting worse -- diseases, and healthcare, we're not making progress anywhere near fast enough in any of these areas."
Microsoft's Tay shows that if we treat newborn AI programs as mature, they can be instantly corrupted. If we don't instill ethics or morals into newly created bots, just as we do with our children, they will digest and spit back the worst of humanity unthinkingly. And while artificially intelligent bots may not deliberately start shooting to kill, they could unintentionally precipitate human disasters, such as a genocide, because of a lack of ethical principles. The time has come to consider who will be the guardian of AI. This is not the first time the debate about the ethics of AI has surfaced.
A computer that taught itself to play almost 50 video games including Space Invaders and Pong is being hailed as the pinnacle of artificial intelligence. But it is unlikely to spark the Terminator-like Armageddon predicted in recent months by technology entrepreneur Elon Musk (who provided early funding for the project) and physicist Stephen Hawking. Despite mastering more than half the classic Atari 2600 games, the program – deep Q-network (DQN), developed by DeepMind Technologies – struggled with more difficult challenges, such as, well, Pac-Man. "On the face of it, it looks trivial in the sense that these are games from the '80s and you can write solutions to them quite easily," said Dr Demis Hassabis, the vice-president of engineering at DeepMind, a British company acquired by Google a year ago for a reported £400m (US$650m). Never before has a computer taught itself how to do a range of complex operations, said Dr Hassabis, one of the company's co-founders.