Moral evaluations occur quickly, following heuristic-like intuitive processes without effortful deliberation. There are several competing explanations for this. The ADC model predicts that moral judgment consists of concurrent evaluations of three different intuitive components: the character of a person (Agent component, A); their actions (Deed component, D); and the consequences brought about in the situation (Consequences component, C). It thereby explains the intuitive appeal of precepts from three dominant moral theories (virtue ethics, deontology, and consequentialism), as well as the flexible yet stable nature of moral judgment. Insistence on single-component explanations has led to many centuries of debate over which moral precepts and theories best describe (or should guide) moral evaluation.
It is not clear what the projects of creating an artificial intelligence (AI) that does ethics, is moral, or makes moral judgments amount to. In this paper we discuss some of the extant metaethical theories and debates in moral philosophy by which such projects should be informed, focusing specifically on the project of creating an AI that makes moral judgments. We argue that the scope and aims of that project depend a great deal on antecedent metaethical commitments. Metaethics therefore plays the role of an Archimedean fulcrum in this context, much like the Archimedean role it is often taken to play in the context of normative ethics.
With robots expected to play an increasingly critical role in making judgment calls where human lives are at stake, it is imperative to model moral reasoning in machines. "Right now the major challenge for even thinking about how robots might be able to understand moral norms is that we don't understand on the human side how humans represent and reason, if possible, with moral norms," notes Tufts University researcher Matthias Scheutz. By gleaning data from enough different situations, Malle believes he will be able to build an approximate map of a human norm network that could be incorporated into a robot, enabling it to summon the correct moral framework for whatever situation is at hand.
The importance of intentions in moral judgments varies across cultures. One theory holds that these differences are driven by variation in cultural norms regarding the acceptability of inferring other people's mental states. To test this hypothesis, McNamara et al. investigated moral judgments among indigenous iTaukei Fijians, whose cultural norms discourage mental-state reasoning. Participants judged cases in which an actor transgressed by accident more harshly than cases in which an actor intended to do harm but failed to do so. When participants were primed to focus on thoughts in a follow-up study, this pattern reversed.
Virtual reality may expose just how evil you truly are. A new study found that people are more willing to sacrifice others for what they believe to be the greater good when immersed in a virtual world. Participants had to decide whether to push a person off a bridge to block a train from killing five people on the railway line below, and 70% gave a sacrificial response.