Intelligent machines, long promised and never delivered, are finally on the horizon. Sufficiently intelligent robots will be able to operate autonomously, free of human control. They will be able to make genuine choices. And if a robot can make choices, there is a real question about whether it will make moral choices. But what is moral for a robot? Is this the same as what's moral for a human? Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there's another problem, one that really ought to come first. It's the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I'd argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature? We like to imagine that artificial intelligence (AI) will be similar to humans, because we are the only advanced intelligence we know. But we are probably wrong.
Researchers at the University of Oslo, Norway, have developed a way for robots to design, evolve and manufacture themselves without human input, using a form of artificial evolution called generative design, together with 3D printers – although, for now at least, the team still has to assemble the printed parts into the final robot. Generative design is something we've talked about several times before: it's where artificial intelligence programs – creative machines, if you will – rather than humans, innovate new products, such as chairs and even Under Armour's Architech sneakers. The lab's latest robot, "Number Four," which is made up of sausage-like plastic parts linked together by servo motors, is trying out different gaits, attempting to figure out the best way to move from one end of the floor to the other. And while you might look at this video and think it's weird or funny, remember that this is just the start. Today it's evolving to learn how to move from A to B as efficiently as possible, but tomorrow it could be "evolving" almost anything – and all at a much faster rate than humans.
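The gait-evolution loop described above can be sketched as a simple genetic algorithm. Everything in this sketch is illustrative, not the Oslo team's actual code: the population size, the per-joint (amplitude, phase) encoding, and especially the fitness function, which here is a toy stand-in for "distance travelled across the floor" that a real system would measure in simulation or on hardware.

```python
import math
import random

random.seed(0)  # reproducible run for this illustration

POP_SIZE = 20      # candidate gaits per generation (assumed value)
N_JOINTS = 4       # servo motors linking the plastic segments (assumed)
GENERATIONS = 30
MUT_RATE = 0.2     # chance each parameter is perturbed during mutation

def random_gait():
    # one amplitude and one phase parameter per joint
    return [random.uniform(-1, 1) for _ in range(2 * N_JOINTS)]

def fitness(gait):
    # Toy stand-in for "distance travelled": rewards joints whose
    # amplitude and phase line up. A real system would run the gait
    # in a physics simulator or on the printed robot and measure it.
    amps, phases = gait[:N_JOINTS], gait[N_JOINTS:]
    return sum(a * math.cos(p) for a, p in zip(amps, phases))

def mutate(gait):
    # small Gaussian perturbations, applied to a fraction of parameters
    return [g + random.gauss(0, 0.1) if random.random() < MUT_RATE else g
            for g in gait]

def evolve():
    pop = [random_gait() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]  # keep the better half
        # refill the population with mutated copies of survivors
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(POP_SIZE - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

The design choice worth noting is that no human specifies *how* to walk: only the fitness measure is given, and selection plus mutation does the rest, which is why the intermediate gaits can look "weird, or funny" before a good one emerges.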
The question of whether AI can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies are constantly constructing our artefacts, including our ethical systems. Consequently, the place of AI in society requires normative, not descriptive reasoning. Here I review the basis of social and ethical behaviour, then propose a definition of morality that facilitates the consideration of AI moral subjectivity. I argue that we are unlikely to construct a coherent ethics such that it is ethical to afford AI moral subjectivity. We are therefore obliged not to build AI to which we would be obliged.
There is a strong possibility that in the not-too-distant future, artificial intelligences (AIs), perhaps in the form of robots, will become capable of sentient thought. Whatever form it takes, this dawning of machine consciousness is likely to have a substantial impact on human society. Microsoft co-founder Bill Gates and physicist Stephen Hawking have in recent months warned of the dangers of intelligent robots becoming too powerful for humans to control. The ethical conundrum of intelligent machines and how they relate to humans has long been a theme of science fiction, and has been vividly portrayed in films such as 1982's Blade Runner and this year's Ex Machina. Academic and fictional analyses of AIs tend to focus on human–robot interactions, asking questions such as: would robots make our lives easier?
Human beings begin to learn the difference between right and wrong before we learn to speak--and thankfully so. We owe much of our success as a species to our capacity for moral reasoning. It's the glue that holds human social groups together, the key to our fraught but effective ability to cooperate. We are (most believe) the lone moral agents on planet Earth--but this may not last. The day may come soon when we are forced to share this status with a new kind of being, one whose intelligence is of our own design. Robots are coming, that much is sure. They are coming to our streets as self-driving cars, to our military as automated drones, to our homes as elder-care robots--and that's just to name a few on the horizon. (Ten million households already enjoy cleaner floors thanks to a relatively dumb little robot called the Roomba.) What we don't know is how smart they will eventually become.