Podcast: Six Experts Explain the Killer Robots Debate - Future of Life Institute

#artificialintelligence

Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it's complicated. In this month's podcast, Ariel spoke with experts from a variety of perspectives on the current status of LAWS, where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre (3:40), artificial intelligence professor Toby Walsh (40:51), Article 36 founder Richard Moyes (53:30), Campaign to Stop Killer Robots founders Mary Wareham and Bonnie Docherty (1:03:38), and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro (1:32:39). You can listen to the podcast above and read the full transcript below. You can check out previous podcasts on SoundCloud, iTunes, Google Play, and Stitcher. If you work with ...


Teaching AI, Ethics, Law and Policy

arXiv.org Artificial Intelligence

Cyberspace and the development of new technologies, especially intelligent systems using artificial intelligence, present enormous challenges to computer professionals, data scientists, managers, and policy makers. There is a need to address issues of professional responsibility as well as ethical, legal, societal, and policy concerns. This paper presents problems and issues relevant to computer professionals and decision makers and suggests a curriculum for a course on ethics, law, and policy. Such a course would create awareness of the ethical issues involved in building and using software and artificial intelligence.


Research Principles

#artificialintelligence

In the near term I see greater prosperity and reduced mortality from causes like highway accidents and medical errors, which account for a huge loss of life today.


Towards Moral Autonomous Systems

arXiv.org Artificial Intelligence

Both the ethics of autonomous systems and the problems of their technical implementation have by now been studied in some detail. Less attention has been given to the areas in which these two separate concerns meet. This paper, written by both philosophers and engineers of autonomous systems, addresses a number of issues in machine ethics that lie precisely at the intersection between ethics and engineering. We first discuss the main challenges which, in our view, machine ethics poses to moral philosophy. We then consider different approaches to the conceptual design of autonomous systems and their implications for the implementation of ethics in such systems. Next we examine problematic areas regarding the specification and verification of ethical behavior in autonomous systems, particularly with a view towards the requirements of future legislation. We discuss transparency and accountability issues that will be crucial for any future wide deployment of autonomous systems in society. Finally, we consider the often overlooked possibility of intentional misuse of AI systems and the possible dangers arising from the deliberately unethical design, implementation, and use of autonomous robots.


What are the dangers of artificial intelligence in our brave new world of self-driving cars?

#artificialintelligence

Like paper, print, steel and the wheel, computer-generated artificial intelligence is a revolutionary technology that can bend how we work, play and love. It is already doing so in ways we can and cannot perceive. As Facebook, Apple and Google pour billions into A.I. development, there is a fledgling branch of academic ethical study--influenced by Catholic social teaching and encompassing thinkers like the Jesuit scientist Pierre Teilhard de Chardin--that aims to study its moral consequences, contain the harm it might do and push tech firms to integrate social goods like privacy and fairness into their business plans. "There are a lot of people suddenly interested in A.I. ethics because they realize they're playing with fire," says Brian Green, an A.I. ethicist at Santa Clara University. "And this is the biggest thing since fire."