In 2016, AI became part of China's national technology development program, an effort to boost AI research and development and formally enter the race to become a leading AI nation. A report published by Tsinghua University highlights the tremendous progress China has made. According to the report, "China leads the world in AI papers, has become the largest owner of AI patents, has the world's second largest AI talent pool, and the highest venture investment in AI." China is running a neck-and-neck race with the United States, followed by countries such as Japan and South Korea. Since 2018, however, a debate has also been underway in China about ethical and regulatory questions concerning the use of AI.
LinkedIn founder Reid Hoffman is one of a host of investors bankrolling a new initiative to develop ethics and governance standards for artificial intelligence (AI), reports Telecoms.com. The $27 million Ethics and Governance of Artificial Intelligence Fund, which also counts Omidyar Network as a founder, will be built around not only engineers and corporations but also social scientists, ethicists, philosophers, faith leaders, economists, lawyers, and policymakers, with the intention of defining standards for AI both in the US and internationally. The team will aim to address areas such as ethical frameworks, moral values, accountability, and social impact. "Artificial intelligence agents will impact our lives in every society on Earth. Technology and commerce will see to that," said Alberto Ibargüen, President of Knight Foundation, which has committed $5 million to the initiative.
We know artificial intelligence will remake -- is already in the process of remaking -- both business and the broader world beyond. What we don't know yet is what unintended consequences AI will wreak as it becomes more advanced and commonplace. One hindrance to envisioning that future is that AI is not "a technology," in the same sense that ERP, for example, is a technology. While there are different flavors of ERP, with differing sets of capabilities, we generally understand that it's software designed to integrate an organization's operational and financial processes into a unified system. Artificial intelligence, though, is "a diverse set of methods and tools continuously evolving in tandem with advances in data science, chip design, cloud services, and end-user adoption," as Ernst & Young (EY) put it in a recent paper.
Artificial intelligence is everywhere, at times obscured and sometimes fully hidden. It lurks in the Facebook newsfeed algorithm that curates the news you see, it is being built into the software of semi-autonomous vehicles that may have to decide who lives in the event of an accident, and its deep neural networks spectacularly beat the world's top Go champions. The applications of AI are evolving with increasing sophistication, sparking considerable and complex questions about the social impact, governance, and ethics of the technology. These questions are particularly salient because accountability mechanisms for algorithms are still in a nascent stage, and the balance of power is skewed towards the industry giants who control these technologies. At this particular moment, the research, development, and deployment of AI is taking place primarily in the private sector, while governments around the world are increasingly contracting out their own use of these powerful technologies.
Our relationship with tech companies has changed significantly over the past 18 months. Ongoing data breaches, and the revelations surrounding the Cambridge Analytica scandal, have raised concerns about who owns our data and how it is being used and shared. Tech companies have vowed to do better. Following his grilling by both the US Congress and the EU Parliament, Facebook CEO Mark Zuckerberg said Facebook will change the way it shares data with third-party suppliers. There is some evidence that this is occurring, particularly with advertisers.