AI developers promise they won't automate murder, with one notable exception

#artificialintelligence

Thousands of artificial intelligence developers and researchers -- including Elon Musk, Google DeepMind co-founder Demis Hassabis, and Google Machine Intelligence head Jeffrey Dean -- just signed a "Lethal Autonomous Weapons Pledge," vowing to resist delegating the decision to murder in a military context to a machine. On its face, this pledge seems like a step in the right direction, a recognition of the concerns of tech employees. But here's the main problem: the top drone manufacturers for the U.S. military -- including but not limited to Northrop Grumman, Boeing, General Atomics, and Textron, which together make up 66 percent of the U.S. military drone market -- did not sign on to the pledge. "We will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons," the pledge reads. "We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge."


I Quit My Job to Protest My Company's Work on Building Killer Robots

#artificialintelligence

When I joined the artificial intelligence company Clarifai in early 2017, you could practically taste the promise in the air. My colleagues were brilliant, dedicated, and committed to making the world a better place. We founded Clarifai 4 Good where we helped students and charities, and we donated our software to researchers around the world whose projects had a socially beneficial goal. We were determined to be the one AI company that took our social responsibility seriously. I never could have predicted that two years later, I would have to quit this job on moral grounds.


Scientists call for ban on killer robots in Geneva today

Daily Mail - Science & tech

AI experts have put together a seven-minute film that depicts a terrifying future where tiny killer drones are programmed to carry out mass killings. Made by an advocacy group called the Campaign to Stop Killer Robots, the footage shows palm-sized drones armed with explosives finding and attacking people without human supervision. These tiny drones can kill with ruthless efficiency, and campaigners warn a preemptive ban on the technology is needed to stop a new era of horrific mass destruction. In the film, machines can spot activists in lecture halls and kill them by propelling an explosive into their heads. The video starts with a developer introducing the new technology, saying these drones can react 100 times faster than a human.


Tech leaders warn against robotic weapons

Daily Mail - Science & tech

Killer robots should be urgently banned before a wave of weapons of mass destruction gets out of control, industry leaders say. Robotics and artificial intelligence experts have signed an open letter demanding the UN prohibit the use of such weapons internationally. The weapons, including lethal microdrone swarms, are on the edge of development and have the potential to create global instability, they warn. In June the Pentagon awarded an $11 million (£8.4 million) contract to build a 'combined-arms squad' of human and robotic capabilities.