Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations

#artificialintelligence

Late last year, famed physicist Stephen Hawking warned that the continued advancement of artificial intelligence will be either "the best, or the worst thing, ever to happen to humanity". We've all seen the Terminator movies and the apocalyptic nightmare that the self-aware AI system Skynet wrought upon humanity; results from recent behaviour tests of Google's DeepMind AI system make it clear just how careful we need to be when building the robots of the future. In tests late last year, DeepMind demonstrated an ability to learn independently from its own memory and to beat the world's best Go players at their own game. It's since been figuring out how to seamlessly mimic a human voice. Now, researchers have been testing its willingness to cooperate with others and have revealed that when DeepMind feels it's about to lose, it opts for "highly aggressive" strategies to ensure that it comes out on top.


Google DeepMind releases a 3-D world to nurture smarter AI agents

#artificialintelligence

Google DeepMind, the Alphabet subsidiary focused on making fundamental progress toward general artificial intelligence, is releasing a new 3-D virtual world today, making it available for other researchers to experiment with and modify however they wish. The new platform, called DeepMind Lab, resembles a blockish 3-D first-person shooter. Inside the world, an AI agent takes the form of a floating orb that can perceive its surroundings, move around, and perform simple actions. Agents are trained through reinforcement learning: they receive positive rewards for completing tasks and gradually learn behaviour that maximizes those rewards. Simple example tasks bundled with the platform include navigating a maze, collecting fruit, and traversing narrow passages without falling off.
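
To make that loop concrete, here is a minimal sketch from the agent's side, assuming the deepmind_lab Python bindings that ship with the open-source release; the level name, observation name, and action layout below are taken from that release and may differ between versions.

```python
# A minimal scripted-agent loop against DeepMind Lab, assuming the
# deepmind_lab Python bindings from the open-source release.
import numpy as np
import deepmind_lab

# 'seekavoid_arena_01' is one of the bundled fruit-collecting levels;
# level and observation names may vary between releases.
env = deepmind_lab.Lab('seekavoid_arena_01', ['RGB_INTERLEAVED'],
                       config={'width': '80', 'height': '60'})
env.reset()

total_reward = 0.0
for _ in range(1000):
    if not env.is_running():
        env.reset()                      # episode over; start a new one
    frame = env.observations()['RGB_INTERLEAVED']  # H x W x 3 pixel array
    # An action is one integer per controller axis (look, strafe,
    # move, fire, jump, crouch); all zeros is a no-op.
    action = np.zeros((7,), dtype=np.intc)
    action[3] = 1                        # push the move axis forward
    # step() advances the world and returns the reward accrued over
    # num_steps frames, e.g. +1 for picking up an apple.
    total_reward += env.step(action, num_steps=4)

print('reward collected:', total_reward)
```

A learning agent would replace the hard-coded action with a policy that maps the pixel observation to an action and updates itself from the returned reward.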


Google's AI has learned to become aggressive

#artificialintelligence

In 2015, according to Business Insider, Google engineers were programming "an advanced kind of chatbot." These earlier artificial intelligence (AI) machines were learning how to respond to questions after being given input containing specific types of dialogue. The engineers were pleased to discover that their AI machines were gaining proficiency in "forming new answers to new questions." And although some AI responses were creative, they were tinged with malevolence. It wasn't reported whether the machine had been prepped with disinformation about the myth of man-made climate change, but the AI's response about the immorality of childbirth would certainly be championed by extreme-green environmentalist groups.


Google Just Found the One Question It Can't Yet Answer

#artificialintelligence

When our robot overlords arrive, will they decide to kill us or cooperate with us? New research from DeepMind, Alphabet Inc.'s London-based artificial intelligence unit, could ultimately shed light on this fundamental question. Its researchers have been investigating the conditions under which reward-optimizing beings, whether human or robot, choose to cooperate rather than compete. The answer could have implications for how computer intelligence may eventually be deployed to manage complex systems such as an economy, city traffic flows, or environmental policy. Joel Leibo, the lead author of a paper DeepMind published online Thursday, said in an e-mail that his team's research indicates that whether agents learn to cooperate or compete depends strongly on the environment in which they operate.
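
That environmental dependence is easy to reproduce in miniature. The sketch below is not DeepMind's actual Gathering environment; it is a toy stand-in with invented payoff numbers, in which two independent Q-learners choose each round between gathering apples and zapping the other player with a beam. The same learning rule yields mostly peaceful gathering when apples are plentiful and frequent zapping when they are scarce.

```python
# Toy illustration (not DeepMind's setup): two independent Q-learners
# in a repeated two-action game. All payoff numbers are invented; the
# point is only that resource abundance flips the learned behaviour.
import random

GATHER, ZAP = 0, 1

def payoff(me, other, apples, zap_cost=0.2):
    share = min(apples / 2, 1.0)   # apples each when both gather freely
    solo = min(apples, 1.0)        # apples when gathering alone
    if me == GATHER:
        # A zapped gatherer is frozen for half the round.
        return share if other == GATHER else share * 0.5
    gained = solo if other == GATHER else share * 0.5
    return gained - zap_cost       # zapping costs aiming time

def zap_rate(apples, rounds=50_000, eps=0.1, lr=0.05):
    q = [[0.0, 0.0], [0.0, 0.0]]   # one two-action Q-table per agent
    zaps = 0
    for _ in range(rounds):
        acts = []
        for i in range(2):
            if random.random() < eps:   # epsilon-greedy exploration
                acts.append(random.randrange(2))
            else:
                acts.append(ZAP if q[i][ZAP] > q[i][GATHER] else GATHER)
        zaps += acts.count(ZAP)
        for i in range(2):              # running-average Q update
            r = payoff(acts[i], acts[1 - i], apples)
            q[i][acts[i]] += lr * (r - q[i][acts[i]])
    return zaps / (2 * rounds)

for apples in (4.0, 1.0):              # abundant vs. scarce apples per round
    print(f'apples per round = {apples}: zap rate = {zap_rate(apples):.0%}')
```

Running it typically shows a zap rate near the exploration floor when apples are abundant and far higher under scarcity, mirroring the qualitative finding that aggression emerges from the environment rather than from the learning rule itself.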