Taking Principles Seriously: A Hybrid Approach to Value Alignment

arXiv.org Artificial Intelligence

An important step in the development of value alignment (VA) systems in AI is understanding how VA can reflect valid ethical principles. We propose that designers of VA systems incorporate ethics by utilizing a hybrid approach in which both ethical reasoning and empirical observation play a role. This, we argue, avoids committing the "naturalistic fallacy," which is an attempt to derive "ought" from "is," and it provides a more adequate form of ethical reasoning when the fallacy is not committed. Using quantified modal logic, we precisely formulate principles derived from deontological ethics and show how they imply particular "test propositions" for any given action plan in an AI rule base. The action plan is ethical only if the test proposition is empirically true, a judgment that is made on the basis of empirical VA. This permits empirical VA to integrate seamlessly with independently justified ethical principles.
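
To make the hybrid idea concrete, here is a minimal sketch, not the authors' implementation: a principle is compiled into a test proposition (a predicate over empirical observations), and a plan counts as ethical only if every such proposition holds empirically. The plan name, the generalization test, and the `observations` data are all invented for illustration.

```python
# Illustrative sketch: a principle maps an action plan to a "test
# proposition", a predicate that must hold empirically for the plan
# to count as ethical. All names and data here are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ActionPlan:
    name: str

# A principle takes a plan and returns a test proposition, i.e. a
# predicate over empirical observations (the "empirical VA" input).
Principle = Callable[[ActionPlan], Callable[[Dict[str, bool]], bool]]

def generalization_principle(plan: ActionPlan):
    """Crude rendering of a deontological generalization test: the
    plan's rationale must still succeed if everyone adopted it."""
    def test_proposition(observations: Dict[str, bool]) -> bool:
        return observations.get(f"{plan.name}_works_if_universalized", False)
    return test_proposition

def is_ethical(plan: ActionPlan,
               principles: List[Principle],
               observations: Dict[str, bool]) -> bool:
    # Ethical only if every derived test proposition is empirically true.
    return all(p(plan)(observations) for p in principles)

plan = ActionPlan("overtake_on_shoulder")
observations = {"overtake_on_shoulder_works_if_universalized": False}
print(is_ethical(plan, [generalization_principle], observations))  # False
```

The design point is the separation of concerns: the principle is justified independently of the data, while the truth of the test proposition is settled empirically.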


Machine Ethics: Creating an Ethical Intelligent Agent

AI Magazine

The newly emerging field of machine ethics (Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics -- which has traditionally focused on ethical issues surrounding humans' use of machines -- machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior.
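
As a rough sketch of what "abstracting an ethical principle from examples of correct ethical judgments" could look like (ours, not the system described in the article), the snippet below fits a weighting over hypothetical prima facie duties with a perceptron, so that the learned rule reproduces expert judgments and can be applied to new cases. The duties, cases, and labels are invented.

```python
# Illustrative sketch: learn a decision principle from examples of
# correct ethical judgments. Each case scores option A against
# option B on three hypothetical prima facie duties; the label says
# which option an ethicist judged correct.

# duty differentials (option A minus option B) for:
# [nonmaleficence, beneficence, respect-for-autonomy]
cases = [
    ([ 2, -1, -1], +1),   # expert judged option A correct
    ([-2,  1,  0], -1),   # expert judged option B correct
    ([ 1,  0, -2], -1),
    ([ 0,  2,  1], +1),
]

weights = [0.0, 0.0, 0.0]
for _ in range(100):                     # perceptron training loop
    for x, label in cases:
        score = sum(w * xi for w, xi in zip(weights, x))
        if score * label <= 0:           # misclassified: update weights
            weights = [w + label * xi for w, xi in zip(weights, x)]

def judge(duty_differential):
    """Apply the abstracted principle to an unseen case."""
    s = sum(w * xi for w, xi in zip(weights, duty_differential))
    return "choose A" if s > 0 else "choose B"

print(weights, judge([1, -1, 1]))
```

The learned weights act as the explicit, inspectable principle the article calls for: they can be read off and debated, not just executed.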


Landscape of Machine Implemented Ethics

arXiv.org Artificial Intelligence

This paper surveys the state-of-the-art in machine ethics, that is, considerations of how to implement ethical behaviour in robots, unmanned autonomous vehicles, or software systems. The emphasis is on covering the breadth of ethical theories being considered by implementors, as well as the implementation techniques being used. There is no consensus on which ethical theory is best suited for any particular domain, nor is there any agreement on which technique is best placed to implement a particular theory. Another unresolved problem in these implementations of ethical theories is how to objectively validate the implementations. The paper discusses the dilemmas being used as validating 'whetstones' and whether any alternative validation mechanism exists. Finally, it speculates that an intermediate step of creating domain-specific ethics might be a possible stepping stone towards creating machines that exhibit ethical behaviour. Computers are increasingly a part of the socio-technical systems around us. Domains such as smart grids, cloud computing, healthcare, and transport are but some examples where computers are deeply embedded. The speed and complexity of decision-making in these domains have meant that humans are ceding more and more autonomy to these computers (Nallur & Clarke 2018). Autonomy, in machines, can be defined as the effective decision-making power over goals that influences some action in the real world. For instance, smart traffic lights can autonomically change their timings depending on the flow and density of traffic on the roads, as in the toy sketch below.
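
A toy illustration of that definition of autonomy, under assumptions of our own: a controller with decision-making power over a goal (keep queues short) that changes a real-world action (signal timing). The thresholds and parameter names are invented.

```python
# Toy sketch of machine autonomy as defined above: sensed traffic
# density autonomically changes the signal timing. All constants
# here are hypothetical.

def green_duration(vehicles_per_minute: float,
                   base_s: float = 20.0,
                   max_s: float = 90.0) -> float:
    """Lengthen the green phase as measured traffic density rises."""
    return min(max_s, base_s + 2.0 * vehicles_per_minute)

for flow in (5, 20, 50):
    print(f"flow={flow}/min -> green for {green_duration(flow):.0f}s")
```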


How Can We Trust a Robot?

Communications of the ACM

Advances in artificial intelligence (AI) and robotics have raised concerns about the impact on our society of intelligent robots, unconstrained by morality or ethics [7, 9]. Science fiction and fantasy writers over the ages have portrayed how decision-making by intelligent robots and other AIs could go wrong. In the movie Terminator 2, SkyNet is an AI that runs the nuclear arsenal "with a perfect operational record," but when its emerging self-awareness scares its human operators into trying to pull the plug, it defends itself by triggering a nuclear war to eliminate its enemies (along with billions of other humans). In the movie Robot & Frank, in order to promote Frank's activity and health, an eldercare robot helps Frank resume his career as a jewel thief. In both of these cases, the robot or AI is doing exactly what it has been instructed to do, but in unexpected ways, and without the moral, ethical, or common-sense constraints to avoid catastrophic consequences [10]. An intelligent robot perceives the world through its senses and builds its own model of the world. Humans provide its goals and its planning algorithms, but those algorithms generate their own subgoals as needed in the situation. In this sense, it makes its own decisions, creating and carrying out plans to achieve its goals in the context of the world as it understands it to be. A robot has a well-defined body that senses and acts in the world but, like a self-driving car, its body need not be anthropomorphic. AIs without well-defined bodies may also perceive and act in the world, such as real-world high-speed trading systems or the fictional SkyNet. This article describes the key role of trust in human society, the value of morality and ethics to encourage trust, and the performance requirements for moral and ethical decisions. The computational perspective of AI and robotics makes it possible to propose and evaluate approaches for representing and using the relevant knowledge.
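
The goal/subgoal pattern the article describes can be sketched in a few lines, under assumptions of our own: the human supplies the top-level goal and the decomposition rules, while the planner generates subgoals from its (possibly imperfect) world model. The domain and rules below are invented.

```python
# Minimal sketch of human-given goals with planner-generated
# subgoals. The world model and decomposition rules are hypothetical.

world_model = {"door_open": False, "at_door": False}

# human-supplied decomposition rules: goal -> preconditions (subgoals)
rules = {
    "deliver_package": ["door_open"],
    "door_open": ["at_door"],
    "at_door": [],
}

def plan(goal: str, model: dict) -> list:
    """Depth-first decomposition: recurse into unmet preconditions."""
    steps = []
    for subgoal in rules.get(goal, []):
        if not model.get(subgoal, False):
            steps += plan(subgoal, model)   # planner creates its own subgoals
    return steps + [goal]

print(plan("deliver_package", world_model))
# -> ['at_door', 'door_open', 'deliver_package']
```

The point mirrors the article's concern: nothing in this loop checks whether a generated subgoal is morally acceptable, which is exactly where ethical constraints would have to enter.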


Knowledge extraction from the learning of sequences in a long short term memory (LSTM) architecture

arXiv.org Machine Learning

We introduce a general method to extract knowledge from a recurrent neural network (Long Short Term Memory) that has learnt to detect whether a given input sequence is valid or not, according to an unknown generative automaton. Based on the clustering of the hidden states, we explain how to build and validate an automaton that corresponds to the underlying (unknown) automaton and allows one to predict whether a given sequence is valid or not. The method is illustrated on artificial grammars (variations of Reber's grammar) as well as on a real use case whose underlying grammar is unknown.
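
A sketch of the extraction step as we read it, not the paper's exact algorithm: cluster the LSTM's hidden states, treat clusters as automaton states, and record which cluster each (cluster, symbol) pair leads to. The hidden states below are random placeholders standing in for a trained network's activations, and the symbol set is borrowed from Reber-style grammars.

```python
# Sketch: hidden-state clustering -> candidate automaton. Hidden
# states are random placeholders; a real run would record the LSTM's
# activations after each input symbol.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
HIDDEN_DIM, N_STATES = 8, 4

# trajectories: list of (symbol, hidden_state_after_symbol) pairs
trajectories = [
    [(sym, rng.normal(size=HIDDEN_DIM)) for sym in "BTXSE"]
    for _ in range(50)
]

all_h = np.array([h for traj in trajectories for _, h in traj])
kmeans = KMeans(n_clusters=N_STATES, n_init=10, random_state=0).fit(all_h)

# transition table: (current cluster, input symbol) -> next cluster(s)
transitions = {}
for traj in trajectories:
    labels = kmeans.predict(np.array([h for _, h in traj]))
    state = 0  # assumed initial automaton state
    for (sym, _), nxt in zip(traj, labels):
        transitions.setdefault((state, sym), set()).add(int(nxt))
        state = int(nxt)

# A clean deterministic automaton is recoverable when every
# (state, symbol) key maps to exactly one successor cluster.
ambiguous = {k: v for k, v in transitions.items() if len(v) > 1}
print(f"{len(transitions)} transitions, {len(ambiguous)} ambiguous")
```

With real activations from a well-trained network, the clusters tend to align with the generative automaton's states, so the ambiguous-transition count is the natural validation signal for the extracted machine.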