Google DeepMind researches why robots kill or cooperate

#artificialintelligence

New research from DeepMind, Alphabet Inc.'s London-based artificial intelligence unit, could ultimately shed light on this fundamental question. They have been investigating the conditions in which reward-optimizing beings, whether human or robot, would choose to cooperate rather than compete. The answer could have implications for how computer intelligence may eventually be deployed to manage complex systems such as an economy, city traffic flows, or environmental policy. Joel Leibo, the lead author of a paper DeepMind published online Thursday, said in an email that his team's research indicates that whether agents learn to cooperate or compete depends strongly on the environment in which they operate. While the research has no immediate real-world application, it could help DeepMind design artificial intelligence agents that can work together in environments with imperfect information.


Google's AI Learns Betrayal and "Aggressive" Actions Pay Off

#artificialintelligence

As the development of artificial intelligence continues at breakneck speed, questions persist about whether we understand what we are getting ourselves into. One fear is that increasingly intelligent robots will take all our jobs. Another is that we will create a world where a superintelligence one day decides it has no need for humans. This fear is well explored in popular culture, through books and films like the Terminator series. Another possibility, and perhaps the one that makes the most sense, is that since humans are the ones creating them, machines and machine intelligences are likely to behave just like humans.


Google's new AI has learned to become 'highly aggressive' in stressful situations

#artificialintelligence

Late last year, famed physicist Stephen Hawking issued a warning that the continued advancement of artificial intelligence will either be "the best, or the worst thing, ever to happen to humanity". We've all seen the Terminator movies, and the apocalyptic nightmare that the self-aware AI system, Skynet, wrought upon humanity, and now results from recent behavior tests of Google's new DeepMind AI system are making it clear just how careful we need to be when building the robots of the future. In tests late last year, Google's DeepMind AI system demonstrated an ability to learn independently from its own memory, and beat the world's best Go players at their own game. It's since been figuring out how to seamlessly mimic a human voice. Now, researchers have been testing its willingness to cooperate with others, and have revealed that when DeepMind feels like it's about to lose, it opts for "highly aggressive" strategies to ensure that it comes out on top.

