Teaching AI, Ethics, Law and Policy

arXiv.org Artificial Intelligence

Cyberspace and the development of new technologies, especially intelligent systems based on artificial intelligence, present enormous challenges to computer professionals, data scientists, managers, and policy makers. There is a need to address professional responsibility and ethical, legal, societal, and policy issues. This paper presents problems and issues relevant to computer professionals and decision makers and suggests a curriculum for a course on ethics, law, and policy. Such a course would create awareness of the ethical issues involved in building and using software and artificial intelligence.

Ethical Concerns of AI


Artificial intelligence is seen by many as a transformative technology. Its power and potentially life-altering consequences are shifting people from thinking purely about functional capabilities to thinking about the ethics of creating such systems. As such, it makes sense to consider carefully what we want these systems to do, and to address ethical questions now, so that we build them with the common good of humanity in mind. Will AI replace human workers? The most immediate concern for many is that AI-enabled systems will replace workers across a wide range of industries.

Why We Should Be Careful When Developing AI


Artificial intelligence offers many advantages for organisations: it can make them better and more efficient, improve customer service with conversational AI, and reduce a wide variety of risks across industries. Although we are only at the start of the AI revolution, we can already see that artificial intelligence will have a profound effect on our lives, both positively and negatively. The financial impact of AI on the global economy is estimated to reach US$15.7 trillion by 2030, with 40% of jobs expected to be lost to artificial intelligence, and global venture capital investment in AI having grown to more than US$27 billion in 2018. Such estimates of AI's potential rest on a broad understanding of its nature and applicability. AI may eventually consist of entirely novel and unrecognisable forms of intelligence, and we can see the first signals of this in its rapid recent development. In 2017, Google's DeepMind developed AlphaGo Zero, an AI agent that learned the abstract strategy board game Go, which has a far more expansive range of moves than chess. Within three days, by playing thousands of games against itself, and without the large volumes of data that would normally be required to develop an AI system, the agent beat the original AlphaGo, the algorithm that had defeated 18-time world champion Lee Sedol.

Taking responsibility for responsible AI


Artificial intelligence (AI) affords a tremendous opportunity not only to increase efficiency and reduce costs, but also to help rethink businesses and solve critical problems. Yet for all the promise AI holds, there is an equal amount of anxiety across economies and societies. Many people feel that advanced technologies will bring profound changes that are predestined and inevitable. As AI becomes more sophisticated, it will start to make, or assist in, decisions that have a greater impact on individual lives. This will raise ethical challenges as people adjust to the larger and more prominent role of automated decision making in society.

Translation: Excerpts from China's 'White Paper on Artificial Intelligence Standardization'


This translation by Jeffrey Ding, edited by Paul Triolo, covers some of the most interesting parts of the Standards Administration of China's 2018 White Paper on Artificial Intelligence Standardization, a joint effort by more than 30 academic and industry organizations overseen by the Chinese Electronics Standards Institute. Ding, Triolo, and Samm Sacks describe the importance of this white paper and other Chinese government efforts to influence global AI development and policy formulation in their companion piece, "Chinese Interests Take a Big Seat at the AI Governance Table." Historical experience demonstrates that new technologies can often improve productivity and promote societal progress. At the same time, because artificial intelligence (AI) is still in the early phase of development, the policies, laws, and standards for safety, ethics, and privacy in this area deserve attention. In the case of AI technology, issues of safety, ethics, and privacy have a direct impact on people's trust in the technology as they interact with AI tools.