Responses to Jack Clark's AI Policy Tweetstorm

#artificialintelligence

Artificial intelligence guru Jack Clark has written the longest, most interesting Twitter thread on AI policy that I've ever read. After a brief introductory tweet on August 6, Clark went on to post an additional 79 tweets in the thread. It was a real tour de force. Because I'm currently finishing up a new book on AI governance, I decided to respond to some of his thoughts on the future of governance for artificial intelligence (AI) and machine learning (ML). Clark is a leading figure in AI science and AI policy today. He is the co-founder of Anthropic, an AI safety and research company, and he previously served as the Policy Director of OpenAI. So I take seriously what he has to say on AI governance matters, and I learned a lot from his tweetstorm. But I also want to push back on a few things. Specifically, several of the issues that Clark raises about AI governance are not unique to AI per se; they apply broadly to many other emerging technology sectors, and even to some traditional ones. Below, I will refer to this as my "general critique" of Clark's tweetstorm. On the other hand, Clark correctly points to some issues that are unique to AI/ML and that really do complicate the governance of computational systems.


Ethics of AI

#artificialintelligence

Disclaimer: this text expresses the opinions of a student, researcher, and engineer who studies and works in the field of Artificial Intelligence in the Netherlands. I think the contents are not as nuanced as they could be, but the text is informed -- in a way, it is just my opinion. Allow me then to begin by reiterating Wittgenstein's famous closing sentence from his first treatise in philosophy, the Tractatus Logico-Philosophicus: "Whereof one cannot speak, thereof one must be silent" [7]. The problem with the Ethics of AI, put succinctly, is the demand for morally-based changes to an empirical scientific field -- the field of AI, or Computer Science. These changes have been easily justified in AI due to its engineering counterpart -- one of the fastest-growing and most productive technological fields at the moment, one whose range of possible reforms threatens every social dimension. Most of these changes, for better and for worse, have been demanded by the political class, and for the most part only in the West. The aim of this article is not to take any part in the political discussion, although this might be impossible by definition -- after all, everything is political. It is still important to attempt to disentangle the views expressed herein from those barked in the political sphere. The very root of the problem is linked to the over-politicization, indeed perhaps even the radicalization, of systems that are not political by nature, like Science. The problem -- that a scientific field has been mixed up with its applications in industry -- is a prominent one.


Pro-business AI regulations need to be global

#artificialintelligence

There is little doubt that artificial intelligence and machine learning will revolutionise decision-making. But how these new technologies make decisions is a mystery: the black art that goes on behind the scenes rests on mathematical models that cannot easily be explained. AI relies on accurate data, but data protection regulations can sometimes act as a barrier, preventing the access required to train algorithms on more diverse use cases. Without this diversity, datasets are limited to individuals who have opted in to sharing their personal information. Broader data mining could improve the accuracy of the models used in machine learning.


Artificial Intelligence Act: will the EU's AI regulation set an example?

#artificialintelligence

When Microsoft unleashed Tay, its AI-powered chatbot, on Twitter on 23 March 2016, the software giant's hope was that it would "engage and entertain people… through casual and playful conversation". An acronym for 'thinking about you', Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn by interacting with human users on the social network. Within hours, things had gone badly wrong. Trolls tweeted politically incorrect phrases at the bot in a bid to manipulate its behaviour. Sure enough, Tay started spewing out racist, sexist and other inflammatory messages to its following of more than 100,000 users. Microsoft was forced to lock the @TayandYou account indefinitely less than a day later, but not before its creation had tweeted more than 96,000 times.


The Explainable AI Imperative Amid Global AI Regulation

#artificialintelligence

The General Data Protection Regulation (GDPR) was a big first step toward giving consumers control of their data. As powerful as this privacy initiative is, a new personal data challenge has emerged. Now, privacy concerns are focused on what companies are doing with data once they have it. This is due to the rise of artificial intelligence (AI) as neural networks accelerate the exploitation of personal data and raise new questions about the need for further regulation and safeguarding of privacy rights. Core to the concern about data privacy are the algorithms used to develop AI models.


The Existential Threat of AI-Enhanced Disinformation Operations

#artificialintelligence

A recent Washington Post article about artificial intelligence (AI) briefly caught the public's attention. A former engineer working in Google's Responsible AI organization went public with his belief that the company's chatbot was sentient. It should be stated bluntly: this AI is not a conscious entity. It is a large language model trained indiscriminately on Internet text that uses statistical patterns to predict the most probable sequence of words. While the tone of the Washington Post piece conjured all the usual Hollywood tropes related to humanity's fear of sentient technology (e.g., storylines from Ex Machina, Terminator, or 2001: A Space Odyssey), it also inadvertently highlighted an uncomfortable truth: as AI capabilities continue to improve, they will become increasingly effective tools for manipulating and fooling humans.
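That "statistical patterns" point is easy to make concrete. Below is a minimal, purely illustrative sketch of next-word prediction -- a toy bigram counter in Python, nothing like the neural architecture of an actual large language model, and with all names (corpus, most_probable_next) invented for the example:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count bigram (word-pair) frequencies in a
# tiny corpus, then always emit the statistically most likely follower.
# Real LLMs learn far richer patterns with neural networks over subword
# tokens, but the underlying objective is the same kind of prediction.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def most_probable_next(word):
    """Return the word most frequently observed after `word`, if any."""
    followers = bigram_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Greedily generate a short "most probable" continuation of a prompt.
word, generated = "the", ["the"]
for _ in range(5):
    word = most_probable_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # -> "the cat sat on the cat"
```

There is no understanding or intent anywhere in this loop, only frequency statistics -- which is exactly the point about mistaking fluent prediction for consciousness.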


Why business is booming for military AI startups

MIT Technology Review

Militaries are responding to the call. NATO announced on June 30 that it is creating a $1 billion innovation fund that will invest in early-stage startups and venture capital funds developing "priority" technologies such as artificial intelligence, big-data processing, and automation. Since the war started, the UK has launched a new AI strategy specifically for defense, and the Germans have earmarked just under half a billion for research and artificial intelligence within a $100 billion cash injection to the military. "War is a catalyst for change," says Kenneth Payne, who leads defense studies research at King's College London and is the author of the book I, Warbot: The Dawn of Artificially Intelligent Conflict. The war in Ukraine has added urgency to the drive to push more AI tools onto the battlefield.


Intellectual property and investment in Artificial Intelligence

#artificialintelligence

Patents provide third-party opinions on the uniqueness of the technology and serve as 'saleable asset insurance' in the event that the company ceases trading.


The Impact of Creative AI – FE News

#artificialintelligence

The UK government has highlighted Artificial Intelligence as one of the four 'Grand Challenges' which will transform our future. What this transformation will look like, however, is very much unknown; we are standing on the edge of a technological revolution no one can truly comprehend. Stories generally give humans a tainted picture of AI: it is created to serve us, becomes aware that we are irrelevant, and tries to destroy us. At SXSW 2018, Tesla's Elon Musk said the current state of AI regulation is "insane," calling the technology "more dangerous than nukes." But why are we so scared of AI, and how could it impact our jobs, or even our humanity?


Russia Probably Has Not Used AI-Enabled Weapons in Ukraine, but That Could Change

#artificialintelligence

In March, WIRED ran a story with the headline "Russia's Killer Drone in Ukraine Raises Fears About AI in Warfare," with the subtitle, "The maker of the lethal drone claims that it can identify targets using artificial intelligence." The story focused on the KUB-BLA, a small kamikaze drone aircraft that smashes itself into enemy targets and detonates an onboard explosive. The KUB-BLA is made by ZALA Aero, a subsidiary of the Russian weapons manufacturer Kalashnikov (best known as the maker of the AK-47), which itself is partly owned by Rostec, a part of Russia's government-owned defense-industrial complex. The WIRED story understandably attracted a lot of attention, but those who only read the sensational headline missed the article's critical caveat: "It is unclear if the drone may have been operated in this [an AI-enabled autonomous] way in Ukraine." Other outlets re-reported the WIRED story, but irresponsibly did so without the caveat.