Can Machines Become Moral?

#artificialintelligence 

The question is heard more and more often, both from those who think that machines cannot become moral (and that to believe otherwise is a dangerous illusion) and from those who think that machines must become moral, given their ever-deeper integration into human society. In fact, the question is hard to answer because, as typically posed, it is beset by confusions and ambiguities. Only by sorting out some of the different ways in which the question is asked, and the motivations behind it, can we hope to find an answer, or at least decide what an adequate answer might look like.

For some, the question is whether artificial agents, especially humanoid robots like Commander Data in Star Trek: The Next Generation, will someday become sophisticated enough, and enough like humans in morally relevant ways, to be accorded equal moral standing with humans. This would include holding the robot morally responsible for its actions and according it the full array of rights that we confer upon humans.