Gizmodo is 20 years old! To celebrate the anniversary, we're looking back at some of the most significant ways our lives have been thrown for a loop by our digital tools. Like so many others after 9/11, I felt spiritually and existentially lost. It's hard to believe now, but I was a regular churchgoer at the time. Watching those planes smash into the World Trade Center woke me from my extended cerebral slumber and I haven't set foot in a church since, aside from the occasional wedding or baptism. I didn't realize it at the time, but that godawful day triggered an intrapersonal renaissance in which my passion for science and philosophy was resuscitated. My marriage didn't survive this mental reboot and return to form, but it did lead me to some very positive places, resulting in my adoption of secular Buddhism, meditation, and a decade-long stint with vegetarianism.
Artificial intelligence guru Jack Clark has written the longest, most interesting Twitter thread on AI policy that I've ever read. After a brief introductory tweet on August 6, Clark went on to post an additional 79 tweets in this thread. It was a real tour de force. Because I'm currently finishing up a new book on AI governance, I decided to respond to some of his thoughts on the future of governance for artificial intelligence (AI) and machine learning (ML). Clark is a leading figure in the field of AI science and AI policy today. He is the co-founder of Anthropic, an AI safety and research company, and he previously served as the Policy Director of OpenAI. So, I take seriously what he has to say on AI governance matters, and I really learned a lot from his tweetstorm. But I also want to push back on a few things. Specifically, several of the issues that Clark raises about AI governance are not unique to AI per se; they are broadly applicable to many other emerging technology sectors, and even some traditional ones. Below, I will refer to this as my "general critique" of Clark's tweetstorm. On the other hand, Clark correctly points to some issues that are unique to AI/ML and which really do complicate the governance of computational systems.
Like the European Union (EU)'s General Data Protection Regulation (GDPR), which entered into force in 2016, the upcoming Artificial Intelligence (AI) Act will have extraterritorial scope and global impact. Considering the AI Act's broad scope and the financial risks of non-compliance, businesses must prepare for these future regulatory changes now and proactively adopt best practices early on, according to a new whitepaper by Swiss data services company Unit8. The paper, titled Upcoming AI Regulation: What to expect and how to prepare, delves into the EU's forthcoming AI Act, providing insights into the future development of AI regulation in Europe and the potential implications for organizations worldwide. The European Commission (EC) unveiled a proposal for a legal framework on AI in April 2021, seeking to address risks specifically created by AI applications; it proposes a list of high-risk applications, sets clear requirements for AI systems used in high-risk applications, and defines specific obligations for users and providers of those applications. The proposed rules also lay out a conformity assessment method for AI systems, provide for enforcement after an AI system is placed on the market, and establish a governance structure at the European and national levels.
A governance paradigm called "responsible AI" describes how a particular organization handles the ethical and legal issues around artificial intelligence (AI). Responsible AI initiatives are primarily motivated by the need to clarify who is accountable if something goes wrong. The data scientists and software engineers who create and implement an organization's AI algorithmic models are responsible for developing appropriate, reliable AI standards. In practice, this means each organization will define its own procedures for preventing bias and ensuring transparency. Supporters of responsible AI believe that a widely accepted governance framework of AI best practices will make it simpler for organizations worldwide to ensure that their AI programming is human-centered, interpretable, and explainable, much as ITIL provided a common framework for delivering IT services.
Artificial Intelligence (AI) systems are poised to drastically alter the way businesses and governments operate on a global scale, with significant changes already under way. This technology has manifested itself in multiple forms, including natural language processing, machine learning, and autonomous systems, but with the proper inputs it can be leveraged to make predictions, recommendations, and even decisions. Accordingly, enterprises are increasingly embracing this dynamic technology. A 2022 global study by IBM found that 77% of companies are either currently using AI or exploring AI for future use, creating value by increasing productivity through automation, improved decision-making, and enhanced customer experience. Further, according to a 2021 PwC study, the COVID-19 pandemic increased the pace of AI adoption for 52% of companies as they sought to mitigate the crisis's impact on workforce planning, supply chain resilience, and demand projection.
The details of the United Kingdom's data protection reform plans are solidifying with the release of the first public version of the Data Protection and Digital Information Bill (DPDIB), and the government has accompanied this with a set of new proposals for AI regulation. The data protection reform bill, which emerged from a consultation process that ran for nearly a year, gives the first concrete shape to a new regulatory framework for the country as it breaks off from terms established under the EU's General Data Protection Regulation (GDPR). The new AI regulation proposals consist of six core principles that attempt to balance consumer and general safety concerns with the needs and wants of the UK's $4.6 billion AI sector. The bill is the next step in the UK's gradual process of breaking entirely with the GDPR in the wake of "Brexit," as the current governing Data Protection Act 2018 largely mirrors the GDPR's terms. The UK government has expressed a desire to set terms that are more business-friendly, but it has to walk a careful path to avoid being deemed an "inadequate" data exchange partner by the EU for lack of GDPR parity.
AI is being devised without sufficient regard for exceptions, a worrying trend for society. They say that there is an exception to every rule. The problem, though, is that oftentimes the standing rule prevails and there is little or no allowance for an exception to be acknowledged or entertained. The average case is applied despite the strong possibility that an exception is at the fore. The exception doesn't get any airtime. It doesn't get a chance to be duly considered. I'm sure you must know what I am talking about.
"How do you get a girlfriend?" "By taking away the rights of women." This exchange would be pretty familiar in the more squalid corners of the internet, but it might surprise most readers to find out that the misogynistic response here was written by an A.I. Recently, a YouTuber in the A.I. community posted a video explaining how he trained an A.I. language model called "GPT-4chan" on the /pol/ board of 4chan, a forum filled with hate speech, racism, sexism, anti-Semitism, and any other offensive content one can imagine. The model was made by fine-tuning the open-source language model GPT-J (not to be confused with the more familiar GPT-3 from OpenAI). Having trained the model's language on the most vitriolic teacher possible, the designer then unleashed it on the forum, where it engaged with users and made over 30,000 posts (about 15,000 of them in a single day, amounting to 10 percent of all posts that day). The reply above was just one example of GPT-4chan's responses to posters' questions.
AI has been present in our cultures in one form or another since the time of the Ancient Greeks and their myths, through Frankenstein and on to Asimov. This long and storied history cannot take away from the fact that AI is now front and center in our world. For both Ericsson and our customers, AI technology is a key business enabler. Looking back at the history of AI, we see a recurring theme: using AI wrongly, or without due diligence, can lead to a widespread escalation of problems on many fronts.