The topic du jour for tech regulation is not, as you might expect, data, but rather a sexy new topic of interest to policymakers: artificial intelligence (AI). It has all the glamour of Hollywood movies, all the fear that propels despots to power, and it comes complete with simple sentences and graphics that make it a Trumpian communicator's dream. We are tired of hearing it, but it remains true: technology is changing rapidly. The pace of change continues to accelerate and, let's face it, regulators and policymakers do a poor job of understanding technology, much less crafting effective regulation for it. In the chaos, however, there are repetitive patterns.
In 2016, AI became part of China's national technology development program, a move intended to boost AI research and development and formally enter the race to become a leading AI nation. A report published by Tsinghua University highlights the tremendous progress China has made. According to the report, "China leads the world in AI papers, has become the largest owner of AI patents, has the world's second largest AI talent pool, and the highest venture investment in AI." China is running a neck-and-neck race with the United States, followed by countries such as Japan and South Korea. Since 2018, however, a debate has also been underway in China about ethical and regulatory questions concerning the use of AI.
Of late, a number of hot topics have arisen in data policy, notably: how to ensure data privacy for individuals; the role of government in the regulation of technology; and how best to leverage big data effectively and ethically. At the forefront of these discussions is the regulation of artificial intelligence (AI). As governments race to regulate AI, they should proceed with caution and seek to balance the needs of society and the private sector. Despite its recent prevalence in public discussion, AI is not a new topic. Thought leaders such as Jonathan Zittrain have commented on the generative Internet and how such systems facilitate new kinds of control.
LinkedIn founder Reid Hoffman is one of a host of investors bankrolling a new initiative to develop ethics and governance standards for artificial intelligence (AI), reports Telecoms.com. The $27 million Ethics and Governance of Artificial Intelligence Fund, which also counts Omidyar Network as a founder, will be built around not only engineers and corporations but also social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers, with the intention of defining standards for AI both in the US and internationally. The team will aim to address such areas as ethical frameworks, moral values, accountability and social impact. "Artificial intelligence agents will impact our lives in every society on Earth. Technology and commerce will see to that," said Alberto Ibargüen, President of Knight Foundation, which has committed $5 million to the initiative.
We know artificial intelligence will remake -- is already in the process of remaking -- both business and the broader world beyond. What we don't yet know is what unintended consequences AI will wreak as it becomes more advanced and commonplace. One hindrance to envisioning that future is that AI is not "a technology" in the same sense that ERP, for example, is a technology. While there are different flavors of ERP, with differing sets of capabilities, we generally understand it as software designed to integrate an organization's operational and financial processes into a unified system. Artificial intelligence, though, is "a diverse set of methods and tools continuously evolving in tandem with advances in data science, chip design, cloud services, and end-user adoption," as Ernst & Young (EY) put it in a recent paper.