This translation by Jeffrey Ding, edited by Paul Triolo, covers some of the most interesting parts of the Standards Administration of China's 2018 White Paper on Artificial Intelligence Standardization, a joint effort by more than 30 academic and industry organizations overseen by the Chinese Electronics Standards Institute. Ding, Triolo, and Samm Sacks describe the importance of this white paper and other Chinese government efforts to influence global AI development and policy formulation in their companion piece, "Chinese Interests Take a Big Seat at the AI Governance Table."

Historical experience demonstrates that new technologies can often improve productivity and promote societal progress. At the same time, because artificial intelligence (AI) is still in the early phase of development, the policies, laws, and standards governing safety, ethics, and privacy in this area deserve close attention. How AI technology handles safety, ethics, and privacy directly affects how much people trust it when interacting with AI tools.
In recent years, the availability of massive data sets and improved computing power have driven the advent of cutting-edge machine learning algorithms. However, this trend has also triggered growing concerns about the ethical issues these algorithms raise. In response, this study proposes a feasible solution that integrates ethics materials with computer science materials in artificial intelligence classrooms. The paper also presents several arguments and supporting evidence for the necessity and effectiveness of this integrated approach.
Cyberspace and the development of new technologies, especially intelligent systems built on artificial intelligence, present enormous challenges to computer professionals, data scientists, managers, and policy makers. There is a need to address professional responsibility alongside ethical, legal, societal, and policy issues. This paper presents problems and issues relevant to computer professionals and decision makers, and suggests a curriculum for a course on ethics, law, and policy. Such a course will create awareness of the ethical issues involved in building and using software and artificial intelligence.
AI promises great opportunity, and with that comes great responsibility for government and enterprise leaders alike. In the last year, there has been an ever-increasing stream of articles, blogs, speeches, and thinking raising ethical concerns about AI. What national or international economic policy changes do we need to make to reduce the potential for disruption in specific geographies or economic sectors? What are the risks of labor displacement? What specific types of job training and counseling programs will be effective in helping people adapt to a new age of machines as co-workers, and to the new jobs created as a result of AI? What new levels of protection should be introduced to safeguard not just individuals' data but also their data personas?
Who should be on the ethics board of a tech company that's in the business of artificial intelligence (A.I.)? Given the attention drawn by the devastating failure of Google's proposed Advanced Technology External Advisory Council (ATEAC) earlier this year, which was announced and then canceled within a week, it's crucial to get to the bottom of this question. Google, for one, admitted it's "going back to the drawing board." Tech companies are realizing that artificial intelligence changes power dynamics and that, as providers of A.I. and machine learning systems, they should proactively consider the ethical impacts of their inventions. That's why they're publishing vision documents like "Principles for A.I." when they haven't done anything comparable for previous technologies.