DeepMind Has Simple Tests That Might Prevent Elon Musk's AI Apocalypse

#artificialintelligence

You don't have to agree with Elon Musk's apocalyptic fears of artificial intelligence to be concerned that, in the rush to apply the technology in the real world, some algorithms could inadvertently cause harm. This type of self-learning software powers Uber's self-driving cars, helps Facebook identify people in social-media posts, and let's Amazon's Alexa understand your questions. Now DeepMind, the London-based AI company owned by Alphabet Inc., has developed a simple test to check if these new algorithms are safe.


Who's afraid of artificial intelligence?

#artificialintelligence

Artificial intelligence is one of the those wonky topics that tech geeks salivate over but spend little time dissecting. Its usually referenced as a simple programming tool called deep learning, which trains robots in a given task by introducing voluminous amounts of data, or as a scary, existential threat to mankind. In our second episode of Season 3, "Who's Afraid of AI?" we explore this technology that's affecting nearly every aspect of the auto industry and beyond. Host Shiraz Ahmed interviews Maya Pindeus, the 27-year-old CEO of Humanizing Autonomy, an AI startup focused on human-machine interactions with autonomous cars. Pindeus met a Daimler executive at Ars Electronica, a conference that focuses on the nexus of arts and technology, in Austria, and later began collobarating on a project.


AI 'scares the hell out of me': Elon Musk outlines greatest fears at SXSW

#artificialintelligence

Elon Musk peddled his AI skepticism and fears at South by Southwest Sunday saying, "I'm very close to the cutting edge in AI and it scares the hell out of me," reports ZDNet. As for the degree of threat AI presents, "mark my words: AI is far more dangerous than nukes," Musk said, reports Business Insider. The rapid rate of AI innovation is spurring the advancement of new technologies like self-driving automobiles, which Musk predicts will be 100-200% safer than cars by the end of 2019. However, the quick pace of AI advancement needs to be regulated by ensuring "that the advent of digital super intelligence is one which is symbiotic with humanity." It's "the single biggest existential crisis that we face," reports ZDNet.


Why many important minds have subscribed to the existential risk of AI

#artificialintelligence

Elon Musk has been sounding the alarms about the existential risk of the human species due to Artificial Intelligence. He's a brilliant leader and has one of the sharpest minds in the public eye, so for him to be alarmed about a field I work in and sense no such existential risk has caused considerable cognitive dissonance. My instinctive disagreement with someone of his stature, along with people like Nick Bostrom, Stephen Hawkins, and Bill Gates made me curious. Nick Bostrom wrote the best seller Super Intelligence that describes a world where machine intelligence dominates human intelligence and ends in our extinction. As bleak as this world view is, Bostrom approaches it with reason and with loose timelines.


Elon Musk is wrong about regulating artificial intelligence

#artificialintelligence

Some people are afraid that heavily armed artificially intelligent robots might take over the world, enslaving humanity -- or perhaps exterminating us. These people, including tech-industry billionaire Elon Musk and eminent physicist Stephen Hawking, say artificial intelligence technology needs to be regulated to manage the risks. But Microsoft founder Bill Gates and Facebook's Mark Zuckerberg disagree, saying the technology is not nearly advanced enough for those worries to be realistic. As someone who researches how AI works in robotic decision-making, drones and self-driving vehicles, I've seen how beneficial it can be. I've developed AI software that lets robots working in teams make individual decisions, as part of collective efforts to explore and solve problems.