Beyond Asimov: how to plan for ethical robots

#artificialintelligence

As robots become more widely integrated into society, we need to be sure they'll behave well among us. In 1942, science fiction writer Isaac Asimov attempted to lay out a philosophical and moral framework for ensuring robots serve humanity, and for guarding against their becoming destructive overlords. This effort resulted in what became known as Asimov's Three Laws of Robotics. Today, more than 70 years after Asimov's first attempt, we have much more experience with robots, including having them drive us around, at least under good conditions. We are approaching the time when robots in our daily lives will be making decisions about how to act. Are Asimov's Three Laws good enough to guide robot behavior in our society, or should we find ways to improve on them?


Creating robots capable of moral reasoning is like parenting – Regina Rini | Aeon Essays

#artificialintelligence

Intelligent machines, long promised and never delivered, are finally on the horizon. Sufficiently intelligent robots will be able to operate autonomously from human control. They will be able to make genuine choices. And if a robot can make choices, there is a real question about whether it will make moral choices. But what is moral for a robot? Is this the same as what's moral for a human? Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there's another problem, one that really ought to come first. It's the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I'd argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature? We like to imagine that artificial intelligence (AI) will be similar to humans, because we are the only advanced intelligence we know. But we are probably wrong.


THE TECHNOLOGICAL CITIZEN » "Moral Machines" by Wendell Wallach and Colin Allen

#artificialintelligence

In the 2004 film I, Robot, Will Smith's character Detective Spooner harbors a deep grudge against all things technological -- and turns out to be justified after a new generation of robots engage in a full-out, summer blockbuster-style revolt against their human creators. Why was Detective Spooner such a Luddite, even before the robots' vicious revolt? Much of his resentment stems from a car accident he endured, in which a robot saved his life instead of a little girl's. The robot's decision haunts Smith's character throughout the movie; he feels the decision lacked emotion, and what one might call 'humanity'. "I was the logical choice," he says. "(The robot) calculated that I had a 45% chance of survival. Sarah only had an 11% chance." He continues, dramatically, "But that was somebody's baby. A human being would've known that."


Can Artificial Intelligence Be Held Morally Responsible? - Evangelical Perspective

#artificialintelligence

The recent acquisition of Mobileye by Intel for approximately $15B tells us that the age of AI is here to stay. The field is now a viable investment and development opportunity. But what is one to do when an artificially intelligent device -- perhaps a vehicle, one that must choose between heading toward you or a crowd of pedestrians -- has to make a choice? What are the options, and what is the desired end of any decision the device should make? Discussions of artificial intelligence and ethics will be ongoing for the next several decades.


How A.I. is Revolutionizing Content Marketing

#artificialintelligence

In a setting befitting the opening scene of a sci-fi thriller, at the recently opened Leverhulme Centre for the Future of Intelligence at Cambridge University, Professor Stephen Hawking cautions that the future of artificial intelligence could potentially be "either the best, or worst thing to ever happen to humanity." Reiterating his 2014 statement to the BBC, Hawking urges that if done wrong, "the development of full AI could spell the end of the human race" -- think Terminator, I, Robot, or Westworld. On the other hand, Hawking believes that amplifying our minds through artificial intelligence can transform every aspect of our lives. The overarching concern surrounding AI is machine morality and whether it is safe for society. If people do not have proper ethical guidelines or fully comprehend the risks AI could pose to mankind, is the expansion of its functionality and the powering of complex self-evolving capabilities, as Hawking would put it, the 'worst thing to happen to humanity'?