Google's New AI Has Learned to Become "Highly Aggressive" in Stressful Situations

#artificialintelligence

Late last year, famed physicist Stephen Hawking issued a warning that the continued advancement of artificial intelligence will either be "the best, or the worst thing, ever to happen to humanity". We've all seen the Terminator movies and the apocalyptic nightmare that the self-aware AI system Skynet wrought upon humanity, and results from recent behaviour tests of Google's new DeepMind AI system make it clear just how careful we need to be when building the robots of the future. In tests late last year, Google's DeepMind AI system demonstrated an ability to learn independently from its own memory, and beat the world's best Go players at their own game. It has since been figuring out how to seamlessly mimic a human voice. Now, researchers have been testing its willingness to cooperate with others, and have revealed that when DeepMind feels like it's about to lose, it opts for "highly aggressive" strategies to ensure that it comes out on top.


Google's AI Has Learned to Become "Highly Aggressive" in Stressful Situations

#artificialintelligence

We've all seen the Terminator movies, and the apocalyptic nightmare that the self-aware AI system, Skynet, wrought upon humanity. And behaviour tests conducted on Google's DeepMind AI system make it clear just how careful we need to be when building the robots of the future. In tests in 2016, Google's DeepMind AI system demonstrated an ability to learn independently from its own memory, and beat the world's best Go players at their own game. Then it started figuring out how to seamlessly mimic a human voice. More recently in 2017, researchers tested its willingness to cooperate with others, and revealed that when DeepMind feels like it's about to lose, it opts for "highly aggressive" strategies to ensure that it comes out on top.


Google's AI Learns Betrayal and "Aggressive" Actions Pay Off

#artificialintelligence

As the development of artificial intelligence continues at breakneck speed, questions persist about whether we understand what we are getting ourselves into. One fear is that increasingly intelligent robots will take all our jobs. Another is that we will create a world where a superintelligence one day decides it has no need for humans. This fear is well explored in popular culture, through books and films like the Terminator series. A third possibility, and maybe the one that makes the most sense, is that since humans are the ones creating them, the machines and machine intelligences are likely to behave just like humans.


Google's Artificial Intelligence Becoming 'Human-Like' With Aggressive, Greedy Behavior | We Are Change

#artificialintelligence

Will artificial intelligence get more aggressive and selfish the more intelligent it becomes? A new report out of Google's DeepMind AI division suggests this is possible, based on the outcome of millions of video game sessions it monitored. The results of the two games indicate that as artificial intelligence becomes more complex, it is more likely to take extreme measures to ensure victory, including sabotage and greed. The first game, Gathering, is a simple one that involves gathering digital fruit. Two DeepMind AI agents, trained using deep reinforcement learning, were pitted against each other.
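
The snippet above only gestures at how such an experiment is set up. As a purely illustrative sketch, and not DeepMind's code (the actual study used deep reinforcement learning agents in a richer two-dimensional world that also lets players temporarily "tag" each other with a beam, which is where the aggressive behaviour appears), the following Python toy pits two independent tabular Q-learning agents against each other in a Gathering-like game where they compete for a single respawning apple on a one-dimensional strip. Every name and parameter here is invented for illustration.

    # Minimal toy sketch: two independent Q-learning agents compete for apples
    # on a 1-D strip. Loosely inspired by the Gathering game described above;
    # NOT DeepMind's implementation, which used deep RL on a 2-D gridworld.
    import random
    from collections import defaultdict

    SIZE = 7               # length of the strip
    EPISODES = 2000        # training episodes
    ACTIONS = [-1, 0, 1]   # move left, stay, move right

    def run():
        # One Q-table per agent, keyed by ((own position, apple position), action).
        q = [defaultdict(float), defaultdict(float)]
        eps, alpha, gamma = 0.1, 0.5, 0.9
        scores = [0, 0]
        for _ in range(EPISODES):
            pos = [0, SIZE - 1]          # agents start at opposite ends
            apple = SIZE // 2            # one apple in the middle
            for _ in range(20):          # short episode
                for i in range(2):
                    state = (pos[i], apple)
                    # epsilon-greedy action selection
                    if random.random() < eps:
                        a = random.choice(ACTIONS)
                    else:
                        a = max(ACTIONS, key=lambda x: q[i][(state, x)])
                    new_pos = min(max(pos[i] + a, 0), SIZE - 1)
                    reward = 1.0 if new_pos == apple else 0.0
                    next_state = (new_pos, apple)
                    best_next = max(q[i][(next_state, x)] for x in ACTIONS)
                    # standard Q-learning update from the agent's own reward only
                    q[i][(state, a)] += alpha * (reward + gamma * best_next - q[i][(state, a)])
                    pos[i] = new_pos
                    if reward:
                        scores[i] += 1
                        apple = random.randrange(SIZE)   # apple respawns elsewhere
        print("apples collected per agent:", scores)

    if __name__ == "__main__":
        run()

Even in this stripped-down version, each agent learns only from its own rewards while sharing the same scarce resource, which is the basic competitive setup the DeepMind researchers studied at a much larger scale.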