The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins' (2018) paper "Critiquing the Reasons for Making Artificial Moral Agents" critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique. The reasons for developing AMAs discussed in van Wynsberghe and Robbins (2018) are: their development is inevitable; the prevention of harm; the necessity for public trust; the prevention of immoral use; such machines would be better moral reasoners than humans; and building these machines would lead to a better understanding of human morality. In this paper, each co-author addresses those reasons in turn. In so doing, this paper demonstrates that the reasons critiqued are not shared by all co-authors; each machine ethicist has their own reasons for researching AMAs. But while we express a diverse range of views on each of the six reasons in van Wynsberghe and Robbins' critique, we nevertheless share the opinion that the scientific study of AMAs has considerable value.
Virtual waiters and waitresses, self-service checkouts and robot orchestra conductors: love it or hate it, automation, artificial intelligence and robotics are here to stay. But will these technological advances, be it in the office or the factory, affect the working lives of men and women equally? While there is debate about the benefits of automation in the world of work, there is no escaping the fact that more robots and artificial intelligence mean more jobs in science, technology, engineering and maths, known collectively as STEM. In the US, home to some of the world's largest technology firms, growth in computing is expected to yield half a million jobs within the next decade. Yet according to the World Economic Forum's study of more than a dozen advanced economies, if current gender ratios remain the same until 2020, then for every twenty jobs lost to automation, men working in STEM will gain five new jobs and women just one.
The authors behind the Foundation for Responsible Robotics' (FRR) report, published on Wednesday, believe the robots could herald a "revolution" in sex, helping people who would otherwise find it hard to have intimate relationships. Sharkey said: "Some people say: 'Well, it's better they rape robots than rape real people.'" However, Sharkey is sceptical of the argument that robots can help people get over rape or child sex fantasies, suggesting it is more likely to "encourage paedophilia and make it acceptable to assault children". The robots could affect human interactions in other ways, suggests van Wynsberghe.
Many organisations are looking to artificial intelligence (AI), machine learning, and robotics to reduce operational costs, increase efficiency, grow revenue, boost security and improve the customer experience. However, many businesses don't know where to start. With this in mind, the ITWeb Meeting of Minds: Artificial Intelligence 2018 event is hosting two workshops to be held on 2 August, at The Forum, in Bryanston. The first workshop, 'Responsible robotics; what is it and who cares', will be facilitated by Dr Aimee van Wynsberghe, assistant professor of ethics and technology at TU Delft (Netherlands) and president of the Foundation for Responsible Robotics. As the design and development of robotics continue, the need for regulation and policy to temper the risk of negative consequences will increase.