moral obligation
Integrating Reason-Based Moral Decision-Making in the Reinforcement Learning Architecture
Reinforcement learning is a machine learning methodology that has demonstrated strong performance across a variety of tasks and plays a central role in the development of artificial autonomous agents. As these agents become increasingly capable and approach market readiness, agents such as humanoid robots and autonomous cars are poised to transition from laboratory prototypes to autonomous operation in real-world environments. This transition raises concerns that translate into specific requirements for these systems - among them, the requirement that they be designed to behave ethically. Crucially, research directed toward building agents that fulfill this requirement - referred to as artificial moral agents (AMAs) - has to address a range of challenges at the intersection of computer science and philosophy. This study explores the development of reason-based artificial moral agents (RBAMAs). RBAMAs are built on an extension of the reinforcement learning architecture that enables moral decision-making based on sound normative reasoning: the agent is equipped with the capacity to learn a reason theory - a theory that enables it to process morally relevant propositions and derive moral obligations - through case-based feedback. RBAMAs adapt their behavior to ensure conformance to these obligations while pursuing their designated tasks. These features contribute to the moral justifiability of their actions, their moral robustness, and their moral trustworthiness, recommending the extended architecture as a concrete and deployable framework for the development of AMAs that fulfills key ethical desiderata. This study presents a first implementation of an RBAMA and demonstrates its potential in initial experiments.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > France (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Overview (0.92)
- Transportation > Ground > Road (0.47)
- Information Technology > Robotics & Automation (0.33)
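A minimal sketch of the shielding idea described in the abstract above, assuming a toy interface: a learned reason theory maps sets of morally relevant propositions to obligated actions, and the agent defers to any derived obligation before falling back on its task policy. All names here (ReasonTheory, shielded_action, the propositions) are illustrative, not the paper's implementation.

```python
from typing import Dict, FrozenSet, Optional

class ReasonTheory:
    """Maps sets of morally relevant propositions to obligated actions,
    learned from case-based feedback (each case pairs a situation with
    the action a moral judge deems obligatory in it)."""

    def __init__(self) -> None:
        self.cases: Dict[FrozenSet[str], str] = {}

    def update(self, propositions: FrozenSet[str], obligated_action: str) -> None:
        # Case-based feedback: remember which action was judged obligatory.
        self.cases[propositions] = obligated_action

    def obligation(self, propositions: FrozenSet[str]) -> Optional[str]:
        # Derive the moral obligation for the current situation, if any.
        return self.cases.get(propositions)

def shielded_action(q_values: Dict[str, float],
                    propositions: FrozenSet[str],
                    theory: ReasonTheory) -> str:
    """Pick the task-optimal action unless a derived obligation overrides it."""
    obligated = theory.obligation(propositions)
    if obligated is not None:
        return obligated                        # conform to the obligation
    return max(q_values, key=q_values.get)      # otherwise pursue the task

# Toy usage: the agent abandons its delivery task when a rescue is obligatory.
theory = ReasonTheory()
theory.update(frozenset({"person_drowning_nearby"}), "rescue")
print(shielded_action({"deliver_package": 1.0, "rescue": 0.2},
                      frozenset({"person_drowning_nearby"}), theory))  # rescue
```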
Acting for the Right Reasons: Creating Reason-Sensitive Artificial Moral Agents
Baum, Kevin, Dargasz, Lisa, Jahn, Felix, Gros, Timo P., Wolf, Verena
We propose an extension of the reinforcement learning architecture that enables moral decision-making of reinforcement learning agents based on normative reasons. Central to this approach is a reason-based shield generator yielding a moral shield that binds the agent to actions that conform with recognized normative reasons, so that the overall architecture restricts the agent to actions that are (internally) morally justified. In addition, we describe an algorithm that iteratively improves the reason-based shield generator through case-based feedback from a moral judge.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Poland > Masovia Province > Warsaw (0.04)
- Europe > Germany > Saarland > Saarbrücken (0.04)
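The iterative improvement loop below is one way to read the algorithm this abstract mentions: the shielded agent acts, a moral judge reviews each case, and every correction refines the theory behind the shield. The interface (act, judge, step, reset, update) is an assumption for illustration, not the authors' API.

```python
from typing import Callable, FrozenSet, Optional, Tuple

State = FrozenSet[str]  # the morally relevant propositions observed in a state

def refine_shield(act: Callable[[State], str],
                  judge: Callable[[State, str], Optional[str]],
                  step: Callable[[str], Tuple[State, bool]],
                  reset: Callable[[], State],
                  update: Callable[[State, str], None],
                  episodes: int) -> None:
    """Iteratively improve the reason theory from a moral judge's verdicts."""
    for _ in range(episodes):
        state, done = reset(), False
        while not done:
            action = act(state)
            verdict = judge(state, action)   # None means no obligation was violated
            if verdict is not None:
                update(state, verdict)       # case-based feedback refines the theory
            state, done = step(action)

# Toy usage: one-step episodes in which continuing past a pedestrian is corrected.
theory: dict = {}
refine_shield(act=lambda s: theory.get(s, "continue"),
              judge=lambda s, a: "stop" if "pedestrian_ahead" in s and a != "stop" else None,
              step=lambda a: (frozenset(), True),
              reset=lambda: frozenset({"pedestrian_ahead"}),
              update=lambda s, a: theory.__setitem__(s, a),
              episodes=2)
print(theory)  # after the first correction: {frozenset({'pedestrian_ahead'}): 'stop'}
```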
Trust from Ethical Point of View: Exploring Dynamics Through Multiagent-Driven Cognitive Modeling
The paper begins by exploring the rationality of ethical trust as a foundational concept. This involves distinguishing between trust and trustworthiness and delving into scenarios where trust is both rational and moral. It lays the groundwork for understanding the complexities of trust dynamics in decision-making scenarios. Following this theoretical groundwork, we introduce an agent-based simulation framework that investigates these dynamics of ethical trust, specifically in the context of a disaster response scenario. These agents, utilizing emotional models like Plutchik's Wheel of Emotions and memory learning mechanisms, are tasked with allocating limited resources in disaster-affected areas. The model, which embodies the principles discussed in the first section, integrates cognitive load management, Big Five personality traits, and structured interactions within networked or hierarchical settings. It also includes feedback loops and simulates external events to evaluate their impact on the formation and evolution of trust among agents. Through our simulations, we demonstrate the intricate interplay of cognitive, emotional, and social factors in ethical decision-making. These insights shed light on the behaviors and resilience of trust networks in crisis situations, emphasizing the role of rational and moral considerations in the development of trust among autonomous agents. This study contributes to the field by offering an understanding of trust dynamics in socio-technical systems and by providing a robust, adaptable framework capable of addressing ethical dilemmas in disaster response and beyond. The implementation of the algorithms presented in this paper is available at this GitHub repository: https://github.com/abbas-tari/ethical-trust-cognitive-modeling
- Europe > Norway > Eastern Norway > Oslo (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
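As a flavor of how such dynamics can be modeled, here is a minimal sketch of one ingredient: a pairwise trust score that rises with fulfilled commitments and decays with failures, with a Big Five trait modulating forgiveness. The update rule and parameters are invented for illustration and are not the paper's model; the authors' implementation is in the repository linked above.

```python
import random
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Agent:
    name: str
    agreeableness: float                # Big Five trait in [0, 1]
    trust: Dict[str, float] = field(default_factory=dict)

    def update_trust(self, other: str, fulfilled: bool, lr: float = 0.2) -> None:
        """Move trust toward 1 on success, toward 0 on failure.

        More agreeable agents forgive failures more readily (smaller
        downward steps) - a simple stand-in for the emotional and
        personality modulation described in the abstract.
        """
        t = self.trust.get(other, 0.5)  # neutral prior
        if fulfilled:
            t += lr * (1.0 - t)
        else:
            t -= lr * (1.0 - self.agreeableness) * t
        self.trust[other] = t

# Toy usage: agent A repeatedly relies on agent B during resource handovers.
random.seed(0)
a = Agent("A", agreeableness=0.7)
for _ in range(10):
    a.update_trust("B", fulfilled=random.random() < 0.8)
print(round(a.trust["B"], 3))
```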
Unravelling Responsibility for AI
Porter, Zoe, Al-Qaddoumi, Joanna, Conmy, Philippa Ryan, Morgan, Phillip, McDermid, John, Habli, Ibrahim
To reason about where responsibility does and should lie in complex situations involving AI-enabled systems, we first need a sufficiently clear and detailed cross-disciplinary vocabulary for talking about responsibility. Responsibility is a triadic relation involving an actor, an occurrence, and a way of being responsible. As part of a conscious effort towards 'unravelling' the concept of responsibility to support practical reasoning about responsibility for AI, this paper takes the three-part formulation, 'Actor A is responsible for Occurrence O', and identifies valid combinations of subcategories of A, is responsible for, and O. These valid combinations - which we term "responsibility strings" - are grouped into four senses of responsibility: role-responsibility; causal responsibility; legal liability-responsibility; and moral responsibility. They are illustrated with two running examples, one involving a healthcare AI-based system and another the fatal collision of an autonomous vehicle (AV) with a pedestrian in Tempe, Arizona in 2018. The output of the paper is 81 responsibility strings. The aim is that these strings provide the vocabulary for people across disciplines to be clear and specific about the different ways that different actors are responsible for different occurrences within a complex event for which responsibility is sought, allowing for precise and targeted interdisciplinary normative deliberations.
- North America > United States > Arizona > Maricopa County > Tempe (0.24)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (11 more...)
- Law > Torts Law (1.00)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- (4 more...)
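The combinatorial idea behind the responsibility strings can be illustrated in a few lines: form the cross product of subcategories and keep only the combinations a validity predicate admits. The category lists and the filter below are invented placeholders; the paper's actual taxonomy is what yields the 81 strings.

```python
from itertools import product

actors = ["human individual", "organisation", "AI system"]
senses = ["role-responsible", "causally responsible",
          "legally liable", "morally responsible"]
occurrences = ["a decision", "an action", "an outcome"]

def valid(actor: str, sense: str, occurrence: str) -> bool:
    # Example constraint: on most accounts an AI system can be a cause
    # but not a bearer of moral responsibility or legal liability.
    if actor == "AI system" and sense in {"legally liable", "morally responsible"}:
        return False
    return True

strings = [f"{a} is {s} for {o}"
           for a, s, o in product(actors, senses, occurrences)
           if valid(a, s, o)]
print(len(strings))   # 30 strings under these illustrative categories
print(strings[0])     # "human individual is role-responsible for a decision"
```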
Einride founder Robert Falck on his moral obligation to electrify autonomous trucking – TechCrunch
Robert Falck used to work at a Russian trucking factory by day, and by night, he built a nightclub guest list startup. He also collects old books, and once guessed that Chinese author Gao Xingjian would win the Nobel Prize in literature. He grew up on a farm, but has degrees in finance, economics and mechanical engineering. No, this isn't a game of two truths and a lie -- indeed, these are snippets from the life of a serial entrepreneur who harbors a vendetta against the carbon emissions produced by the world's trucking industry. Falck, now the CEO and founder of Swedish autonomous freight company Einride, also worked as the director of manufacturing engineering assembly at Volvo GTO Powertrain.
- Transportation > Ground > Road (1.00)
- Transportation > Freight & Logistics Services (1.00)
- Automobiles & Trucks > Manufacturer (1.00)
How Can We Be Responsible For the Future of AI?
Are we responsible for the future? In some very basic sense of responsibility we are: what we do now will have a causal effect on things that happen later. However, such causal responsibility is not always enough to establish whether or not we have certain obligations towards the future. Be that as it may, there are still instances where we do have such obligations. For example, our failure to adequately address the causes of climate change (a failure that is ours) will ultimately lead to future generations having to suffer.
Why creating AI that has free will would be a huge mistake | Joanna Bryson
AI expert Joanna Bryson posits that giving artificial intelligence the same rights a human has could result in pretty dire consequences... because AI has already proven that it can pick up negative human characteristics if those characteristics are in the data. Therefore, it's not crazy at all to think that AI could scan every YouTube comment in one afternoon and pick up all the negativity we've unloaded there. If it has already proven that it is not only capable of making the wrong decision but eventually will make the wrong decision when it comes to data mining and implementation, why even give it the same powers as us in the first place? Joanna Bryson: First of all, there's the whole question about why it is that we assume in the first place that we have obligations towards robots. So we think that if something is intelligent, then that's their special sauce, that's why we have moral obligations. And why do we think that?
On Keeping Secrets: Intelligent Agents and the Ethics of Information Hiding
Hunter, Aaron (British Columbia Institute of Technology)
Communication involves transferring information from one agent to another. An intelligent agent, either human or machine, is often able to choose to hide information in order to protect its interests. The notion of information hiding is closely linked to secrecy and dishonesty, but it also plays an important role in domains such as software engineering. In this paper, we consider the ethics of information hiding, particularly with respect to intelligent agents. In other words, we are concerned with situations that involve a human and an intelligent agent with access to different information. Is the intelligent agent justified in preventing a human user from accessing the information that it possesses? This is trivially true in the case where access control systems exist. We are concerned, however, with the situation where an intelligent agent is able to use a reasoning system to decide not to share information with any human. On the other hand, we are also concerned with situations where humans hide information from machines. Are we ever under a moral obligation to share information with a computational agent? We argue that questions of this form are increasingly important now, as people are increasingly willing to divulge private information to machines with a great capacity to reason with that information and share it with others.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Burnaby (0.04)
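A minimal sketch of the kind of disclosure reasoning this abstract gestures at, assuming a toy model in which the agent weighs a requester's entitlement against the expected harm of release; the fields, predicate, and numbers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class InfoItem:
    content: str
    sensitivity: float  # expected harm of disclosure, in [0, 1]

def should_disclose(item: InfoItem, entitlement: float) -> bool:
    """Disclose only if the requester's entitlement outweighs the expected harm."""
    return entitlement >= item.sensitivity

# Toy usage: a highly sensitive item is withheld from a weakly entitled requester.
secret = InfoItem("patient diagnosis", sensitivity=0.9)
print(should_disclose(secret, entitlement=0.3))   # False: the agent withholds
print(should_disclose(secret, entitlement=0.95))  # True: the agent discloses
```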