Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it.1 A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and to demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance. In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. 
and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.2

Although there is no uniformly agreed-upon definition, AI generally is thought to refer to "machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention."3 According to researchers Shubhendu and Vijay, these software systems "make decisions which normally require [a] human level of expertise" and help people anticipate problems or deal with issues as they come up.4 As such, they operate in an intentional, intelligent, and adaptive manner.

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decisionmaking.
AI traditionally refers to an artificial creation of human-like intelligence that can learn, reason, plan, perceive, or process natural language. Several issues must be considered when addressing AI, including its socio-economic impacts; transparency, bias, and accountability; new uses for data; security and safety; ethical questions; and how AI facilitates the creation of new ecosystems. At the same time, this complex field faces specific challenges, including a lack of transparency and interpretability in decision-making; issues of data quality and potential bias; safety and security implications; questions of accountability; and potentially disruptive impacts on social and economic structures.
This translation by Jeffrey Ding, edited by Paul Triolo, covers some of the most interesting parts of the Standards Administration of China's 2018 White Paper on Artificial Intelligence Standardization, a joint effort by more than 30 academic and industry organizations overseen by the Chinese Electronics Standards Institute. Ding, Triolo, and Samm Sacks describe the importance of this white paper and other Chinese government efforts to influence global AI development and policy formulation in their companion piece, "Chinese Interests Take a Big Seat at the AI Governance Table."

Historical experience demonstrates that new technologies can often improve productivity and promote societal progress. But at the same time, because artificial intelligence (AI) is still in the early phase of development, the policies, laws, and standards governing safety, ethics, and privacy in this area deserve attention. In the case of AI technology, issues of safety, ethics, and privacy directly affect the trust people place in AI tools as they interact with them.
The city of Chicago is using algorithms to try to prevent crimes before they happen. In Pittsburgh, traffic lights that use artificial intelligence (AI) have helped cut travel times by 25 percent and idling times by 40 percent.1 Meanwhile, the European Union's real-time early detection and alert system (RED) employs AI to counter terrorism, using natural language processing (NLP) to monitor and analyze social media conversations.2

Such examples illustrate how AI can improve government services. As it continues to be enhanced and deployed, AI can truly transform this arena, generating new insights and predictions, increasing speed and productivity, and creating entirely new approaches to citizen interactions. AI in all its forms can generate powerful new capabilities in areas as diverse as national security, food safety, regulation, and health care. But to fully realize these benefits, leaders must look at AI strategically and holistically. Many government organizations have only begun planning how to incorporate AI into their missions and technology. The decisions they make in the next three years could determine their success or failure well into the next decade, as AI technologies continue to evolve. It will be a challenging period.