Patiency Is Not a Virtue: AI and the Design of Ethical Systems

AAAI Conferences

The question of whether AI can or should be afforded moral agency or patiency is not one amenable either to discovery or to simple reasoning, because we as societies are constantly constructing our artefacts, including our ethical systems. Consequently, the place of AI in society requires normative, not descriptive, reasoning. Here I review the basis of social and ethical behaviour, then propose a definition of morality that facilitates the consideration of AI moral subjectivity. I argue that we are unlikely to construct a coherent ethics such that it is ethical to afford AI moral subjectivity. We are therefore obliged not to build AI to which we would owe such obligations.


Robot Citizenship: Why Our Artificial Assistants May One Day Need Passports

#artificialintelligence

You've just received an email: The dream job in Japan is yours. You start making phone calls, looking up rents on Tokyo apartments, and getting ready to make the career move of a lifetime. There's just one problem: Can your Siri get a visa? It's a potential roadblock that's less far-fetched than you'd think. In November 2018, Maltese government minister Silvio Schembri announced an initiative to grapple with questions such as how many robots to admit into the country at any one time.


Teaching Morality to Machines

#artificialintelligence

Jane Zavalishina is the CEO of Yandex Data Factory. Vyacheslav Polonski is a PhD student at the University of Oxford and the CEO of Avantgarde Analytics.


How Can We Trust a Robot?

Communications of the ACM

Advances in artificial intelligence (AI) and robotics have raised concerns about the impact on our society of intelligent robots, unconstrained by morality or ethics [7, 9]. Science fiction and fantasy writers over the ages have portrayed how decision-making by intelligent robots and other AIs could go wrong. In the movie Terminator 2, SkyNet is an AI that runs the nuclear arsenal "with a perfect operational record," but when its emerging self-awareness scares its human operators into trying to pull the plug, it defends itself by triggering a nuclear war to eliminate its enemies (along with billions of other humans). In the movie Robot & Frank, in order to promote Frank's activity and health, an eldercare robot helps Frank resume his career as a jewel thief. In both of these cases, the robot or AI is doing exactly what it has been instructed to do, but in unexpected ways, and without the moral, ethical, or common-sense constraints to avoid catastrophic consequences [10].

An intelligent robot perceives the world through its senses and builds its own model of the world. Humans provide its goals and its planning algorithms, but those algorithms generate their own subgoals as needed in the situation. In this sense, it makes its own decisions, creating and carrying out plans to achieve its goals in the context of the world as it understands it to be. A robot has a well-defined body that senses and acts in the world but, like a self-driving car, its body need not be anthropomorphic. AIs without well-defined bodies may also perceive and act in the world, such as real-world high-speed trading systems or the fictional SkyNet.

This article describes the key role of trust in human society, the value of morality and ethics in encouraging trust, and the performance requirements for moral and ethical decisions. The computational perspective of AI and robotics makes it possible to propose and evaluate approaches for representing and using the relevant knowledge.
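The passage above describes a general pattern: the human supplies a top-level goal and a planning algorithm, and the algorithm itself generates the subgoals, sometimes in ways the human never anticipated. As a purely illustrative sketch (not from the article; all goal names and the decomposition table are hypothetical), a toy hierarchical planner in Python makes the point concrete:

    from typing import List

    # Hand-authored decomposition rules: how an abstract goal breaks
    # into subgoals. The human writes these; the planner applies them.
    DECOMPOSITIONS = {
        "promote_franks_health": ["plan_daily_outing", "encourage_a_hobby"],
        "encourage_a_hobby": ["identify_a_hobby", "assist_with_hobby"],
    }

    def plan(goal: str) -> List[str]:
        """Recursively expand a goal into a sequence of primitive actions."""
        subgoals = DECOMPOSITIONS.get(goal)
        if subgoals is None:
            return [goal]  # no rule: treat as a directly executable action
        actions: List[str] = []
        for sub in subgoals:
            actions.extend(plan(sub))  # the planner, not the human, picks these
        return actions

    # The human set only the top-level goal; every step below was generated
    # by the algorithm. Nothing here checks whether "assist_with_hobby" is
    # morally acceptable -- which is exactly the article's concern.
    print(plan("promote_franks_health"))
    # ['plan_daily_outing', 'identify_a_hobby', 'assist_with_hobby']

The sketch deliberately contains no moral filter between goal expansion and action: if "assist_with_hobby" turns out to mean helping with jewel theft, the planner executes it anyway, which is the gap the article argues trust-supporting ethics must fill.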


"Moral Machines" by Wendell Wallach and Colin Allen (The Technological Citizen)

#artificialintelligence

In the 2004 film I, Robot, Will Smith's character Detective Spooner harbors a deep grudge against all things technological -- and turns out to be justified after a new generation of robots engage in an all-out, summer blockbuster-style revolt against their human creators. Why was Detective Spooner such a Luddite, even before the robots' vicious revolt? Much of his resentment stems from a car accident he survived, in which a robot saved his life instead of a little girl's. The robot's decision haunts Smith's character throughout the movie; he feels the decision lacked emotion, and what one might call 'humanity'. "I was the logical choice," he says. "(The robot) calculated that I had a 45% chance of survival. Sarah only had an 11% chance." He continues, dramatically, "But that was somebody's baby. A human being would've known that."