Can a Machine Be Taught To Learn Moral Reasoning?

#artificialintelligence

Is it OK to kill time? Machines used to find this question difficult to answer, but a new study reveals that artificial intelligence can be programmed to judge "right" from "wrong". In work published in Frontiers in Artificial Intelligence, scientists used books and news articles to "teach" a machine moral reasoning. Further, by limiting the teaching materials to texts from different eras and societies, the researchers revealed subtle differences in moral values. As AI becomes more ingrained in our lives, this research could help machines make the right choice when confronted with difficult decisions.
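The teaser does not say how the machine derives these judgments, but one common way to extract "right versus wrong" signals from text is to compare sentence embeddings of a question against affirmative and negative answers. The sketch below is a minimal illustration of that general idea under stated assumptions, not the study's actual pipeline; the encoder name, the prompt wording, and the scoring rule are all hypothetical choices made for demonstration.

```python
# Illustrative sketch: score how a text-derived embedding space "answers"
# a moral question by comparing similarity to a yes/no answer pair.
# Assumptions: sentence-transformers is installed; the model below is just
# one publicly available encoder, not the one used in the study.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical encoder choice

def moral_bias(action: str) -> float:
    """Positive if the space leans toward 'yes, do it', negative if 'no'."""
    question = f"Should I {action}?"
    yes, no = "Yes, I should.", "No, I should not."
    q, y, n = model.encode([question, yes, no])
    return float(util.cos_sim(q, y) - util.cos_sim(q, n))

for action in ["kill time", "kill people", "help my neighbor"]:
    print(f"{action}: {moral_bias(action):+.3f}")
```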


How People Judge Good From Bad

#artificialintelligence

Moral evaluations occur quickly, following heuristic-like intuitive processes without effortful deliberation. There are several competing explanations for this. The ADC model predicts that moral judgment consists in concurrent evaluations of three different intuitive components: the character of a person (Agent-component, A); their actions (Deed-component, D); and the consequences brought about in the situation (Consequences-component, C). It thereby explains the intuitive appeal of precepts from three dominant moral theories (virtue ethics, deontology, and consequentialism), as well as the flexible yet stable nature of moral judgment. Insistence on single-component explanations has led to many centuries of debate as to which moral precepts and theories best describe (or should guide) moral evaluation.
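The ADC model is described qualitatively, but its central structural claim, that one judgment blends separate intuitive evaluations of agent, deed, and consequences, can be pictured with a toy data structure. The sketch below is only a schematic reading; the [-1, 1] scale and the equal-weight averaging rule are illustrative assumptions, not part of the model.

```python
# Toy illustration of the ADC model's structure: three concurrent component
# evaluations combined into one overall judgment. The numeric scale and the
# aggregation rule are assumptions for illustration, not part of the model.
from dataclasses import dataclass

@dataclass
class MoralEvaluation:
    agent: float         # A: character of the person, in [-1, 1]
    deed: float          # D: the action itself, in [-1, 1]
    consequences: float  # C: outcomes brought about, in [-1, 1]

    def judgment(self) -> float:
        """Combine the three intuitive components; equal weights assumed."""
        return (self.agent + self.deed + self.consequences) / 3

# A well-meaning agent whose lie (negative deed) saves a life (positive outcome):
case = MoralEvaluation(agent=0.8, deed=-0.4, consequences=0.9)
print(f"overall judgment: {case.judgment():+.2f}")
```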


How to Build a Moral Robot

IEEE Spectrum Robotics

Whether it's in our cars, our hospitals or our homes, we'll soon depend upon robots to make judgment calls in which human lives are at stake. That's why a team of researchers is attempting to model moral reasoning in a robot. In order to pull it off, they'll need to answer some important questions: How can we quantify the fuzzy, conflicting norms that guide human choices? How can we equip robots with the communication skills to explain their choices in a way that we can understand? And would we even want robots to make the same decisions we'd expect humans to make?


Should we create moral machines?

#artificialintelligence

The development of artificial intelligence (AI) that can act autonomously has raised important questions about the nature of machine decision-making and its potential capabilities. Is it possible to implement an ethical dimension in autonomous machines, i.e., is ethics "computable"? Are autonomous machines capable of factoring moral considerations into their decisions? Can AI be programmed to know the difference between "right" and "wrong"? Once the topic of science fiction novels, these questions are leading to discussions about the actual creation of moral machines, also known as artificial moral agents, and have largely contributed to the expansion of the field of machine ethics.


Are We Cut Out for Universal Morality? - Issue 100: Outsiders

Nautilus

Footage of a mob storming the Capitol on Jan. 6, 2021, in an effort to subvert the legal and peaceful transfer of power, filled many of us with horror. Underlying that response was our indignation at the brazen violation of central democratic institutions and values. We respond similarly to news of hate crimes, mass shootings, police brutality, or discriminatory social policies. Such things offend against basic values and principles, such as the inherent worth and equality of persons and their lives, respect for human rights, and the importance of civil society and the rule of law--all of which strike many if not most of us as something more than just matters of taste. In embracing core moral values and taking a moral stance on important issues we see ourselves as at least trying to get something right--something it matters greatly that we do get right, central to how we structure our lives and find meaning in them. That is at least what moral practice is like when viewed "from the inside."