Are you average in every way, or do you sometimes stand out from the crowd? Your answer might have big implications for how you're treated by the algorithms that governments and corporations are deploying to make important decisions affecting your life. "What algorithms?" you might ask. The ones that decide whether you get hired or fired, whether you're targeted for debt recovery, and what news you see, for starters. Automated decisions made using statistical processes "will screw [some] people by default, because that's how statistics works," said Dr Julia Powles, an Australian lawyer currently based at New York University's Information Law Institute.
Human beings begin to learn the difference between right and wrong before we learn to speak--and thankfully so. We owe much of our success as a species to our capacity for moral reasoning. It's the glue that holds human social groups together, the key to our fraught but effective ability to cooperate. We are (most believe) the lone moral agents on planet Earth--but this may not last. The day may come soon when we are forced to share this status with a new kind of being, one whose intelligence is of our own design. Robots are coming, that much is sure. They are coming to our streets as self-driving cars, to our military as automated drones, to our homes as elder-care robots--and that's just to name a few on the horizon. (Ten million households already enjoy cleaner floors thanks to a relatively dumb little robot called the Roomba.) What we don't know is how smart they will eventually become.
The fully programmable Nao robot has been used to experiment with machine ethics. In his 1942 short story 'Runaround', science-fiction writer Isaac Asimov introduced the Three Laws of Robotics -- engineering safeguards and built-in ethical principles that he would go on to use in dozens of stories and novels. They were: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Fittingly, 'Runaround' is set in 2015. Real-life roboticists are citing Asimov's laws a lot these days: their creations are becoming autonomous enough to need that kind of guidance.
Seventy-five years ago, the celebrated science-fiction writer Isaac Asimov published a short story called 'Runaround'. Set on Mercury, it features a sophisticated robot nicknamed Speedy that has been ordered to gather some of the chemical selenium for two human space adventurers. Speedy gets near the selenium, but a toxic gas threatens to destroy the robot. When it retreats from the gas to save itself, the threat recedes and it feels obliged to go back for the selenium. It is left going round in circles.
Public fear will be the biggest hurdle for intelligent robots to overcome. Understanding society's longstanding fear of self-aware automatons should be a consideration within robotics labs, especially those specializing in fully autonomous humanoid robots. Isaac Asimov anticipated this fear and proposed the Three Laws of Robotics as a way to mollify it somewhat. This paper explores the "Frankenstein Complex" and current opinions from noted robotics researchers regarding the possible implementation of Asimov's Laws. It is clear from these unscientific responses why the Three Laws are impractical in a general sense, even though the ethical issues involved are at the forefront of researchers' minds. The onus is therefore placed on the roboticists of today and the future to hold themselves to a standard, similar to the Hippocratic Oath, that preserves the spirit of Asimov's Laws.