We live in a digital world, where every day we interact with digital systems, whether through a mobile device or from inside a car. These systems are increasingly autonomous, making decisions for their users or on their behalf. As a consequence, ethical issues, privacy among them (for example, unauthorized disclosure and mining of personal data, or access to restricted resources), are emerging as matters of utmost concern, since they affect the moral rights of each human being and have an impact on the social, economic, and political spheres. Europe, through its institutional bodies, is at the forefront of regulation and reflection on these issues. Privacy with respect to the processing of personal data is recognized as part of the fundamental rights and freedoms of individuals.
In recent years, the availability of massive data sets and improved computing power has driven the advent of cutting-edge machine learning algorithms. However, this trend has also triggered growing concerns about its ethical implications. In response, this study proposes a feasible solution that combines ethics and computer science materials in artificial intelligence classrooms. In addition, the paper presents several arguments and pieces of evidence in favor of the necessity and effectiveness of this integrated approach.
As we start to encounter AI systems in various morally and legally salient environments, some have begun to explore how current responsibility ascription practices might be adapted to meet such new technologies [19, 33]. A critical viewpoint today is that autonomous and self-learning AI systems pose a so-called responsibility gap. The autonomy of these systems challenges human control over them, while their adaptability leads to unpredictability. Hence, it might be infeasible to trace responsibility back to a specific entity if these systems cause any harm. Considering responsibility practices as the adoption of certain attitudes towards an agent, scholarly work has also posed the question of whether AI systems are appropriate subjects of such practices [15, 29, 37] -- e.g., they might "have a body to kick," yet they "have no soul to damn".
For the first time since 1992, the ACM Code of Ethics and Professional Conduct (the Code) is being updated. The Code Update Task Force, in conjunction with the Committee on Professional Ethics (COPE), is seeking advice from ACM members on the update. We indicated many of the motivations for changing the Code when we shared Draft 1 of Code 2018 with the ACM membership in the December 2016 issue of CACM and with others through email and the COPE website (ethics.acm.org). Since December, we have been collecting feedback and vetting proposed changes. We have seen a broad range of concerns about responsible computing, including bullying in social media, cyber security, and autonomous machines making ethically significant decisions. The Task Force appreciates the many serious and thoughtful comments it has received. In response, the Task Force has proposed changes that are reflected in Draft 2 of the Code. A number of these changes are substantial and require some explanation. In this article, we discuss them, and we explain why we did not include other requested changes in Draft 2. We look forward to receiving your comments on these suggested changes and your requests for additional changes as we work on Draft 3 of the Code. We have provided opportunities for your comments and an open discussion of Draft 2 at the ACM Code 2018 Discussion website [http://code2018.acm.org/discuss]. Comments can also be contributed at the COPE website https://ethics.acm.org, and by direct emails to firstname.lastname@example.org. ACM members are part of the computing profession, and the ACM's Code of Ethics and Professional Conduct should reflect the conscience of that profession.
In this paper we discuss approaches to evaluating and validating the ethical claims of a Conversational AI system. We outline considerations around both a top-down regulatory approach and bottom-up processes. We describe the ethical basis for each approach and propose a hybrid, which we demonstrate using the case of a customer service chatbot as an example. We speculate on the kinds of top-down and bottom-up processes that would need to exist for a hybrid framework to function successfully as both an enabler and a shepherd among multiple use cases and multiple competing AI solutions.