Using Asimov's "Bicentennial Man" as a springboard, a number of metaethical issues concerning the emerging field of Machine Ethics are discussed. Although the ultimate goal of Machine Ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we make a judgment about the ethical status of the machine itself, a judgment that will be particularly difficult to make. Finally, it is argued that Asimov's "Three Laws of Robotics" are an unsatisfactory basis for Machine Ethics, regardless of the status of the machine.
By Luis Fierro Carrión (*) In March 2016, Google's AlphaGo artificial intelligence system beat Korean master Lee Sedol at Go, an ancient Chinese board game. The possible moves in this game have a level of complexity far greater than those of chess. Google developed an algorithm that allowed AlphaGo to learn iteratively from each game it played, through a deep neural network. AlphaGo learned to discover new strategies on its own by playing thousands of games against itself and adjusting the connections in its neural networks through a process of trial and error known as "reinforcement learning". Artificial intelligence (AI) systems have been conquering ever more complex games: tic-tac-toe in 1952, checkers in 1994, chess in 1997, "Jeopardy" (a quiz game spanning many subjects) in 2011; and in 2014, Google's algorithms learned to play 49 Atari video games using only the screen pixels and the scores obtained as input.
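The trial-and-error process described above can be illustrated with a minimal tabular Q-learning sketch. This is not AlphaGo's actual algorithm (which combined deep neural networks with tree search); it is a deliberately simplified toy: an agent in a five-state corridor learns, purely from reward feedback, that moving right toward the goal is better than moving left. All state, action, and parameter names here are illustrative assumptions.

```python
import random

# Toy reinforcement learning: tabular Q-learning on a 5-state corridor.
# The agent starts at state 0 and receives a reward of 1.0 on reaching
# state 4. It learns action values purely by trial and error.

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # action 0 moves left, action 1 moves right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # Q[state][action] -> estimated discounted return
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one
            if rng.random() < EPSILON:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # nudge the estimate toward reward plus discounted future value
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
# After training, moving right looks better than moving left in every
# non-goal state, even though the agent was never told the rules.
print(all(Q[s][1] > Q[s][0] for s in range(N_STATES - 1)))
```

The same update rule, scaled up with a neural network in place of the table and self-play in place of a fixed environment, is the core of the "learning by reinforcement" the article refers to.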
Machines turning on their creators has been a popular theme in books and movies for decades, but serious thinkers are now taking the subject seriously. Physicist Stephen Hawking says, "the development of full artificial intelligence could spell the end of the human race." Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably "our biggest existential threat." Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk -- and to do something about it while there is still time. Hawking made his most recent comments at the beginning of December, in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.
Artificial intelligence has been the stuff of mad dreams, and sometimes nightmares, throughout our collective history. We've come a long way from the 15th-century automaton knight crafted by Leonardo da Vinci. Within the past century, artificial intelligence has inched further into our realities and day-to-day lives, and there is now no doubt we're entering a new age of intelligence. Early computing technology ushered in a new branch of computer science dealing with the simulated intelligence of machines. In recent history, we've used A.I. for common tasks, such as playing against the computer in chess matches and other gameplay behaviors.
You submitted your questions about artificial intelligence and robotics, and we put them – and some of our own – to The Conversation's experts. It is 100% plausible that we'll have human-like artificial intelligence. I say this even though the human brain is the most complex system in the universe that we know of. But there are also no physical laws we know of that would prevent us from reproducing or exceeding its capabilities. Popular AI, from Isaac Asimov to Steven Spielberg, is plausible. What the question doesn't address is: when will it be plausible? Most AI researchers (including me) see little or no evidence of it coming anytime soon. Progress on the major AI challenges is slow, if real.