National and international organisations have responded to these societal fears by convening ad hoc expert committees on AI, often tasked with drafting policy documents. These include the High-Level Expert Group on Artificial Intelligence appointed by the European Commission, the expert group on AI in Society of the Organisation for Economic Co-operation and Development (OECD), the Advisory Council on the Ethical Use of Artificial Intelligence and Data in Singapore, and the select committee on Artificial Intelligence of the United Kingdom (UK) House of Lords. As part of their institutional mandates, these committees have produced, or are reportedly producing, reports and guidance documents on AI. Similar efforts are under way in the private sector, especially among corporations that rely on AI for their business. In 2018 alone, companies such as Google and SAP publicly released AI guidelines and principles. Declarations and recommendations have also been issued by professional associations and nonprofit organisations such as the Association for Computing Machinery (ACM), Access Now and Amnesty International.
We live in a digital world, where every day we interact with digital systems, whether through a mobile device or from inside a car. These systems are increasingly autonomous, making decisions on behalf of their users or even over and above them. As a consequence, ethical issues, including privacy ones (for example, unauthorized disclosure and mining of personal data, or access to restricted resources), are emerging as matters of utmost concern, since they affect the moral rights of every human being and have an impact on the social, economic, and political spheres. Europe, through its institutional bodies, is at the forefront of regulation and reflection on these issues: privacy with respect to the processing of personal data is recognized as part of the fundamental rights and freedoms of individuals.
Current advances in the research, development and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed at harnessing the "disruptive" potential of new AI technologies. Designed as a comprehensive evaluation, this paper analyzes and compares these guidelines, highlighting overlaps as well as omissions, and thereby gives a detailed overview of the field of AI ethics. Finally, I examine to what extent the respective ethical principles and values are implemented in the practice of researching, developing and applying AI systems, and how the effectiveness of the demands of AI ethics can be improved.
In the past 18 months, we have seen a huge rise in interest in AI development and adoption. Countries are formulating national strategies, and companies are positioning themselves for the fourth industrial revolution. With this pervasive push towards AI comes an increased awareness that AI systems should act in the interests of humans, and this is not as trivial as one might think. This article provides an overview of several key initiatives that propose ways of approaching AI ethics, regulation and sustainability. As this is a fast-evolving field, I aim to update this article regularly.