Why We Must Not Build Automated Weapons of War

#artificialintelligence

Over 100 CEOs of artificial intelligence and robotics firms recently signed an open letter warning that their work could be repurposed to build lethal autonomous weapons -- "killer robots." They argued that building such weapons would open a "Pandora's Box" that could forever alter war. Over 30 countries have armed drones or are developing them, and with each successive generation, drones gain more autonomy. Automation has long been used in weapons to help identify targets and maneuver missiles.


[FoR&AI] The Seven Deadly Sins of Predicting the Future of AI – Rodney Brooks

#artificialintelligence

We are surrounded by hysteria about the future of Artificial Intelligence and Robotics. There is hysteria about how powerful they will become, and how quickly, and there is hysteria about what they will do to jobs. As I write these words on September 2nd, 2017, I note just two news stories from the last 48 hours.

Yesterday, in the New York Times, Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, wrote an opinion piece titled How to Regulate Artificial Intelligence, in which he does a good job of arguing against the hysteria that Artificial Intelligence is an existential threat to humanity. He proposes rather sensible ways of thinking about regulations for Artificial Intelligence deployment, rather than the Chicken Little "the sky is falling" calls for regulation of research and knowledge that we have seen from people who really, really should know a little better.

Today, there is a story in Market Watch claiming that robots will take half of today's jobs in 10 to 20 years. It even has a graphic to prove the numbers. But how many robots are currently operational in those jobs? How many realistic demonstrations have there been of robots working in this arena? Similar stories apply to all the other job categories in that graphic, where it is suggested that there will be massive disruptions of 90%, and even as much as 97%, in jobs that currently require physical presence at some particular job site.

Mistaken predictions lead to fear of things that are not going to happen. Why are people making mistakes in predictions about Artificial Intelligence and robotics, so that Oren Etzioni, I, and others need to spend time pushing back on them? Below I outline seven ways of thinking that lead to mistaken predictions about robotics and Artificial Intelligence. We find instances of these ways of thinking in many of the predictions about our AI future. I am going to first list the four general topic areas of such predictions that I notice, along with a brief assessment of where I think each currently stands. The first is Artificial General Intelligence (AGI). Research on AGI is an attempt to build a thinking entity, distinguished from current-day AI technology such as Machine Learning. Here the idea is that we will build autonomous agents that operate much like beings in the world.


Podcast: Law and Ethics of Artificial Intelligence - Future of Life Institute

#artificialintelligence

The rise of artificial intelligence presents not only technical challenges, but also important legal and ethical challenges for society, especially regarding machines like autonomous weapons and self-driving cars. To discuss these issues, I interviewed Matt Scherer and Ryan Jenkins. Matt is an attorney and legal scholar whose scholarship focuses on the intersection between law and artificial intelligence. Ryan is an assistant professor of philosophy and a senior fellow at the Ethics and Emerging Sciences Group at California Polytechnic State University, where he studies the ethics of technology. In this podcast, we discuss accountability and transparency with autonomous systems, government regulation vs. self-regulation, fake news, and the future of autonomous systems.


Artificial intelligence could 'evolve faster than the human race'

#artificialintelligence

A sinister threat is brewing deep inside the technology laboratories of Silicon Valley, according to Professor Stephen Hawking. Artificial Intelligence, disguised as helpful digital assistants and self-driving vehicles, is gaining a foothold, and it could one day spell the end for mankind. The world-renowned professor has warned that robots could evolve faster than humans and that their goals would be unpredictable. Hawking claimed AI would be difficult to stop if appropriate safeguards are not in place. During a talk in Cannes, Google's chairman Eric Schmidt said AI will be developed for the benefit of humanity and that there will be systems in place in case anything goes awry.