You don't have to agree with Elon Musk's apocalyptic fears of artificial intelligence to be concerned that, in the rush to apply the technology in the real world, some algorithms could inadvertently cause harm. This type of self-learning software powers Uber's self-driving cars, helps Facebook identify people in social-media posts, and lets Amazon's Alexa understand your questions. Now DeepMind, the London-based AI company owned by Alphabet Inc., has developed a simple test to check if these new algorithms are safe.
Elon Musk has been sounding the alarm about the existential risk that artificial intelligence poses to the human species. He's a brilliant leader with one of the sharpest minds in the public eye, so for him to be alarmed about a field I work in, and in which I sense no such existential risk, has caused considerable cognitive dissonance. My instinctive disagreement with someone of his stature, along with people like Nick Bostrom, Stephen Hawking, and Bill Gates, made me curious. Nick Bostrom wrote the bestseller Superintelligence, which describes a world where machine intelligence dominates human intelligence and ends in our extinction. As bleak as this worldview is, Bostrom approaches it with reason and with loose timelines.
Thousands of the top names in tech have come together to take a stand against the development of killer robots. Tesla and SpaceX CEO Elon Musk, who has long been outspoken about the dangers of AI, joined the founders of Google DeepMind, the XPrize Foundation, and over 2,500 individuals and companies in signing a pledge this week condemning lethal autonomous weapons. The industry experts vowed to 'neither participate in nor support' the use of such weapons, arguing that 'the decision to take a human life should never be delegated to a machine.'
Some people are afraid that heavily armed artificially intelligent robots might take over the world, enslaving humanity -- or perhaps exterminating us. These people, including tech-industry billionaire Elon Musk and eminent physicist Stephen Hawking, say artificial intelligence technology needs to be regulated to manage the risks. But Microsoft founder Bill Gates and Facebook's Mark Zuckerberg disagree, saying the technology is not nearly advanced enough for those worries to be realistic. As someone who researches how AI works in robotic decision-making, drones and self-driving vehicles, I've seen how beneficial it can be. I've developed AI software that lets robots working in teams make individual decisions, as part of collective efforts to explore and solve problems.
Over a hundred experts in robotics and artificial intelligence are calling on the UN to ban the development and use of killer robots and add them to a list of 'morally wrong' weapons that includes blinding lasers and chemical weapons. Google's Mustafa Suleyman and Tesla's Elon Musk are among the most prominent names on a list of 116 tech experts who have signed an open letter asking the UN to ban autonomous weapons in a bid to prevent an arms race. In December 2016 the UN voted to begin formal talks over the future of such weapons, including tanks, drones and automated machine guns. So far, 19 out of 123 member states have called for an outright ban on lethal autonomous weapons. One of the letter's key organisers, Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Australia, unveiled the letter at the opening of the International Joint Conference on Artificial Intelligence in Melbourne.