Asimov's " Three Laws of Robotics " and Machine Metaethics

AAAI Conferences

Using Asimov's "Bicentennial Man" as a springboard, a number of metaethical issues concerning the emerging field of Machine Ethics are discussed. Although the ultimate goal of Machine Ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we make a judgment about the ethical status of the machine itself, a judgment that will be particularly difficult to make. Finally, it is argued that Asimov's "Three Laws of Robotics" are an unsatisfactory basis for Machine Ethics, regardless of the status of the machine.


Stephen Hawking fears AI could destroy humankind. Should you worry?

AITopics Original Links

Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, "The development of full artificial intelligence could spell the end of the human race." Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably "our biggest existential threat." Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk -- and to do something about it while there is still time. Hawking made his most recent comments at the beginning of December, in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.


Will Artificial Intelligence become a threat to humanity?

#artificialintelligence

By Luis Fierro Carrión. In March 2016, Google's AlphaGo artificial intelligence system beat Korean master Lee Sedol at "Go", an ancient Chinese board game whose possible moves are vastly more complex than those of chess. Google developed an algorithm that let AlphaGo learn from each game it played, using a deep neural network. AlphaGo discovered new strategies on its own by playing thousands of games against itself and adjusting the connections of its neural network through a trial-and-error process known as "reinforcement learning". Artificial intelligence (AI) systems have been conquering ever more complex games: tic-tac-toe in 1952, checkers in 1994, chess in 1997, and "Jeopardy" (a general-knowledge quiz show) in 2011; and in 2014, Google's algorithms learned to play 49 Atari video games simply by studying the screen pixels and the scores obtained.
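The article describes self-play reinforcement learning only in outline. As a rough illustration of the "trial and error" idea, here is a minimal sketch of tabular self-play learning (a Monte Carlo-style value update) on tic-tac-toe. This is not AlphaGo's actual method, which combined deep neural networks with Monte Carlo tree search; all names and hyperparameter values below are illustrative choices, not anything from the article.

import random
from collections import defaultdict

# Toy self-play reinforcement learning (NOT AlphaGo's method: AlphaGo used
# deep neural networks plus tree search; a lookup table keeps this visible).
Q = defaultdict(float)     # Q[(board, move)] -> learned value of that move
ALPHA, EPSILON = 0.5, 0.1  # learning rate and exploration rate (illustrative)

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def choose_move(board):
    # Epsilon-greedy: usually exploit the best known move, sometimes explore.
    if random.random() < EPSILON:
        return random.choice(legal_moves(board))
    return max(legal_moves(board), key=lambda m: Q[(board, m)])

for episode in range(50_000):
    board, player, history = " " * 9, "X", []
    while True:
        move = choose_move(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        if winner(board) or " " not in board:
            break
        player = "O" if player == "X" else "X"
    # "Trial and error": nudge every (state, move) visited during the game
    # toward that player's final result (+1 win, -1 loss, 0 draw).
    final = winner(board)
    for state, move, p in history:
        reward = 0.0 if final is None else (1.0 if final == p else -1.0)
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

After training, playing greedily from the learned table (EPSILON set to 0) yields a passable tic-tac-toe opponent. AlphaGo's advance was doing the analogous thing at the scale of Go, where a lookup table is impossible and a deep network must generalize across positions instead.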


10 best books on artificial intelligence

#artificialintelligence

Artificial intelligence has been the stuff of mad dreams, and sometimes nightmares, throughout our collective history. We've come a long way from the 15th-century automaton knight crafted by Leonardo da Vinci. Within the past century, artificial intelligence has inched further into our realities and day-to-day lives, and there is now no doubt we're entering a new age of intelligence. Early computing technology ushered in a new branch of computer science dealing with the simulated intelligence of machines. In recent history, we've used A.I. for common tasks, such as playing against the computer in chess matches and other games.


Beyond Asimov: how to plan for ethical robots

#artificialintelligence

As robots become integrated into society more widely, we need to be sure they'll behave well among us. In 1942, science fiction writer Isaac Asimov attempted to lay out a philosophical and moral framework for ensuring robots serve humanity, and for guarding against their becoming destructive overlords. This effort resulted in what became known as Asimov's Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Today, more than 70 years after Asimov's first attempt, we have much more experience with robots, including having them drive us around, at least under good conditions. We are approaching the time when robots in our daily lives will be making decisions about how to act. Are Asimov's Three Laws good enough to guide robot behavior in our society, or should we find ways to improve on them?