Cyberspace and the development of new technologies, especially intelligent systems using artificial intelligence, present enormous challenges to computer professionals, data scientists, managers, and policymakers. There is a need to address professional responsibility as well as ethical, legal, societal, and policy issues. This paper presents problems and issues relevant to computer professionals and decision makers and suggests a curriculum for a course on ethics, law, and policy. Such a course would create awareness of the ethical issues involved in building and using software and artificial intelligence.
At the perfect intersection of technology and civil service, every government process would be automated, streamlining benefits, outcomes, and applications for every citizen in a digitally enabled country. That approach carries a significant layer of protocol needed to ensure citizens feel empowered in decision-making processes and in how their government addresses their needs digitally. Canada currently leads the world in AI, thanks largely to major government investments such as the Pan-Canadian Artificial Intelligence Strategy. The field is pervasive: there is hardly an industry it has not disrupted, from mining to legal aid. In fact, government may be one of the most obvious places where automated decision processes can save time and money.
This translation by Jeffrey Ding, edited by Paul Triolo, covers some of the most interesting parts of the Standards Administration of China's 2018 White Paper on Artificial Intelligence Standardization, a joint effort by more than 30 academic and industry organizations overseen by the Chinese Electronics Standards Institute. Ding, Triolo, and Samm Sacks describe the importance of this white paper and other Chinese government efforts to influence global AI development and policy formulation in their companion piece, "Chinese Interests Take a Big Seat at the AI Governance Table." Historical experience demonstrates that new technologies can often improve productivity and promote societal progress. At the same time, because artificial intelligence (AI) is still in the early phase of development, the policies, laws, and standards for safety, ethics, and privacy in this area deserve attention. In the case of AI technology, issues of safety, ethics, and privacy have a direct impact on people's trust in AI technology through their interactions with AI tools.
We live in a digital world, where every day we interact with digital systems, whether through a mobile device or from inside a car. These systems increasingly make autonomous decisions on behalf of their users, or even beyond their control. As a consequence, ethical issues, privacy among them (for example, unauthorized disclosure and mining of personal data, or access to restricted resources), are emerging as matters of utmost concern, since they affect the moral rights of each human being and have an impact on the social, economic, and political spheres. Europe is at the forefront of regulation and reflection on these issues through its institutional bodies. Privacy with respect to the processing of personal data is recognized as part of the fundamental rights and freedoms of individuals.
An independent AI ethics research centre is set to receive $7.5 million of funding courtesy of the folks at Facebook. The new research centre, called the Institute for Ethics in Artificial Intelligence, was created in collaboration with the Technical University of Munich (TUM). Facebook, like many companies, is facing outside concerns about the development of AI and its potential societal impact. The centre should help to ensure Facebook keeps up with ethical best practices. "At Facebook, ensuring the responsible and thoughtful use of AI is foundational to everything we do -- from the data labels we use, to the individual algorithms we build, to the systems they are a part of. We're developing new tools like Fairness Flow, which can help generate metrics for evaluating whether there are unintended biases in certain models. We also work with groups like the Partnership for AI, of which Facebook is a founding member, and the AI4People initiative. However, AI poses complex problems which industry alone cannot answer, and the independent academic contributions of the Institute will play a crucial role in furthering ethical research on these topics."