10 policy principles needed for artificial intelligence


Artificial intelligence is an area where regulation is necessary but must not be allowed to curtail innovation.

How Do We Align Artificial Intelligence with Human Values? - Future of Life Institute


A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best. What will trigger this change? Recently, some of the top minds in AI and related fields got together to discuss how we can ensure AI remains beneficial throughout this transition, and the result was the Asilomar AI Principles document. The intent of these 23 principles is to offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, "Of course, it's just a start."

Truly Autonomous Machines Are Ethical Artificial Intelligence

John Hooker, Carnegie Mellon University. Revised December 2018.

Abstract: While many see the prospect of autonomous machines as threatening, autonomy may be exactly what we want in a superintelligent machine. There is a sense of autonomy, deeply rooted in the ethical literature, in which an autonomous machine is necessarily an ethical one. Development of the theory underlying this idea not only reveals the advantages of autonomy, but it sheds light on a number of issues in the ethics of artificial intelligence. It helps us to understand what sort of obligations we owe to machines, and what obligations they owe to us. It clears up the issue of assigning responsibility to machines or their creators. More generally, a concept of autonomy that is adequate to both human and artificial intelligence can lead to a more adequate ethical theory for both.

There is a good deal of trepidation at the prospect of autonomous machines. They may wreak havoc and even turn on their creators. We fear losing control of machines that have minds of their own, particularly if they are intelligent enough to outwit us. There is talk of a "singularity" in technological development, at which point machines will start designing themselves and create superintelligence (Vinge 1993, Bostrom 2014). Do we want such machines to be autonomous? There is a sense of autonomy, deeply rooted in the ethics literature, in which this may be exactly what we want. The attraction of an autonomous machine, in this sense, is that it is an ethical machine. The aim of this paper is to explain why this is so, and to show that the associated theory can shed light on a number of issues in the ethics of artificial intelligence (AI).

Essential Principles for Autonomous Robotics

Morgan & Claypool Publishers

This book is a snapshot of the motivations and methodologies behind our collective attempts to transform our lives and to cohabit with robots that work with and for us. It reviews seminal and ongoing developments that form the foundations for successful paradigms, and guides the reader to them. It attempts to demystify the abilities and limitations of robots, and it serves as a progress report on the continuing work that will fuel future endeavors. ISBN 9781627050586, 155 pages.