When our robot overlords arrive, will they decide to kill us or cooperate with us? New research from DeepMind, Alphabet Inc.'s London-based artificial intelligence unit, could ultimately shed light on this fundamental question. They have been investigating the conditions in which reward-optimizing beings, whether human or robot, would choose to cooperate, rather than compete. The answer could have implications for how computer intelligence may eventually be deployed to manage complex systems such as an economy, city traffic flows, or environmental policy. Joel Leibo, the lead author of a paper DeepMind published online Thursday, said in an e-mail that his team's research indicates that whether agents learn to cooperate or compete depends strongly on the environment in which they operate.
StarCraft II has been a target of Alphabet's DeepMind AI research for a while now. The UK AI company took on Blizzard's sci-fi strategy game starting last year, announcing plans to create an open AI research environment based on the game so that others could contribute to the effort of building a virtual agent that can best the top human StarCraft players in the world. Now, DeepMind and Blizzard are opening the doors to that environment, with new tools including a machine learning API, a large game replay dataset, an open-source DeepMind toolset and more. The new release of the StarCraft II API on the Blizzard side includes a Linux package built to run in the cloud, as well as support for Windows and Mac. It also supports offline AI vs. AI matches and provides anonymized game replays from actual human players for training agents — a dataset that starts at 65,000 complete matches and will grow to more than 500,000 over the next few weeks. StarCraft II is such a useful environment for AI research because of how complex and varied its games can be, with multiple open routes to victory in each individual match.
In 2015, according to Business Insider, Google engineers were programming "an advanced kind of chatbot." These earlier Artificial Intelligence (AI) machines were learning how to respond to questions after being given input containing specific types of dialogue. The engineers were pleased to discover that their AI machines were gaining proficiency in "forming new answers to new questions." And although some AI responses were creative, they were tinged with malevolence. It wasn't reported whether the machine had been prepped with disinformation about the myth of man-made climate change, but the AI's response about the immorality of childbirth would certainly be championed by extreme-green environmentalist groups.
Late last year, famed physicist Stephen Hawking issued a warning that the continued advancement of artificial intelligence will be either "the best, or the worst thing, ever to happen to humanity". We've all seen the Terminator movies and the apocalyptic nightmare that the self-aware AI system, Skynet, wrought upon humanity, and now results from recent behavior tests of Google's new DeepMind AI system are making it clear just how careful we need to be when building the robots of the future. In tests late last year, Google's DeepMind AI system demonstrated an ability to learn independently from its own memory, and beat the world's best Go players at their own game. It has since been figuring out how to seamlessly mimic a human voice. Now, researchers have been testing its willingness to cooperate with others, and have revealed that when DeepMind feels like it's about to lose, it opts for "highly aggressive" strategies to ensure that it comes out on top.
But the newest artificial intelligence system from Google's DeepMind division does indeed dream, metaphorically at least, about finding apples in a maze. Researchers at DeepMind wrote in a paper published online Thursday that they had achieved a leap in the speed and performance of a machine learning system. It was accomplished by, among other things, imbuing the technology with attributes that function in a way similar to how animals are thought to dream. The paper explains how DeepMind's new system — named Unsupervised Reinforcement and Auxiliary Learning agent, or UNREAL — learned to master a three-dimensional maze game called Labyrinth 10 times faster than the existing best AI software. It can now play the game at 87 per cent of the performance of expert human players, the DeepMind researchers said.