The Importance of International Norms in Artificial Intelligence Ethics

#artificialintelligence

DALL-E 2, an image-generating artificial intelligence (AI), has captured the public's attention with stunning portrayals of Godzilla eating Tokyo and photorealistic images of astronauts riding horses in space. The model is the newest iteration of a text-to-image algorithm, an AI model that generates images from text descriptions. OpenAI, the company behind DALL-E 2, combined a language model, GPT-3, and a computer vision model, CLIP, to train DALL-E 2 on 650 million images with associated text captions. The integration of these two models made it possible for OpenAI to train DALL-E 2 to generate a vast array of images in many different styles. Despite DALL-E 2's impressive accomplishments, there are significant issues with how the model portrays people and with the biases it has acquired from the data it was trained on.
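
To make the image-caption pairing concrete, here is a minimal sketch of a CLIP-style contrastive objective, the kind of image-text alignment such training relies on. The embeddings are random placeholders standing in for encoder outputs, and the temperature value is an illustrative assumption, not OpenAI's actual configuration.

```python
# Minimal sketch of a CLIP-style contrastive loss over image-caption pairs.
# Encoders are omitted: random tensors stand in for their embedding outputs.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Pull matching image/caption embeddings together; push mismatches apart."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # pairwise similarity matrix
    targets = torch.arange(len(logits))              # i-th image matches i-th caption
    # Symmetric cross-entropy: classify the right caption per image and vice versa.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy batch: 8 image/caption embedding pairs of dimension 512.
print(contrastive_loss(torch.randn(8, 512), torch.randn(8, 512)))
```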


Top Explainable AI Frameworks For Transparency in Artificial Intelligence

#artificialintelligence

Our daily lives are being impacted by artificial intelligence (AI) in several ways. Virtual assistants, predictive models, and facial recognition systems are practically ubiquitous. Numerous sectors use AI, including education, healthcare, automobiles, manufacturing, and law enforcement. The judgments and forecasts produced by AI-enabled systems are becoming increasingly significant and, in many instances, vital to survival. This is particularly true for AI systems used in healthcare, autonomous vehicles, and even military drones.
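
For a concrete taste of what such frameworks do, here is a minimal sketch using SHAP, one widely used explainability library, to attribute a model's prediction to individual input features. The random-forest model and diabetes dataset are toy placeholders, not examples drawn from the article.

```python
# Minimal sketch: explaining a tree model's prediction with SHAP.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)            # exact, fast attributions for tree models
shap_values = explainer.shap_values(X.iloc[:5])  # per-feature contributions, shape (5, n_features)
for name, contrib in zip(X.columns, shap_values[0]):
    print(f"{name}: {contrib:+.2f}")             # how each feature pushed row 0's prediction
```

Attributions like these are what let a practitioner check whether a model is leaning on sensitive or spurious features before deploying it.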


Responses to Jack Clark's AI Policy Tweetstorm

#artificialintelligence

Artificial intelligence guru Jack Clark has written the longest, most interesting Twitter thread on AI policy that I've ever read. After a brief introductory tweet on August 6, Clark went on to post an additional 79 tweets in the thread. It was a real tour de force. Because I'm currently finishing a new book on AI governance, I decided to respond to some of his thoughts on the future of governance for artificial intelligence (AI) and machine learning (ML). Clark is a leading figure in AI science and AI policy today. He is the co-founder of Anthropic, an AI safety and research company, and he previously served as the Policy Director of OpenAI. So I take seriously what he has to say on AI governance matters and learned a lot from his tweetstorm. But I also want to push back on a few things. Specifically, several of the issues Clark raises about AI governance are not unique to AI per se; they apply broadly to many other emerging technology sectors, and even to some traditional ones. Below, I will refer to this as my "general critique" of Clark's tweetstorm. On the other hand, Clark correctly points to some issues that are unique to AI/ML and that genuinely complicate the governance of computational systems.


Practicing Responsible Artificial Intelligence (AI)

#artificialintelligence

Democratization of technology and the pandemic have fueled the adoption of AI/ML technologies across the public sector. Several public health agencies have leveraged AI/ML technologies to harness data-driven intelligence and transform aspects of community healthcare, including the identification of vulnerable populations, patient engagement, optimization of care quality, delivery of personalized interventions, and elimination of fraudulent transactions. While these AI-enabled initiatives have generated new insights and enabled agencies to improve outcomes, they have also raised concerns about the ethical principles and values guiding AI/ML adoption. There is a renewed focus on ensuring trust, fairness, privacy, accountability, and transparency from the experimentation stage through the industrialization of AI initiatives. Governance is a critical aspect of AI/ML adoption.


Ethics of AI

#artificialintelligence

Disclaimer: this text expresses the opinions of a student, researcher, and engineer who studies and works in the field of Artificial Intelligence in the Netherlands. I think the contents are not as nuanced as they could be, but the text is informed -- in a way, it is just my opinion. Allow me then to begin by reiterating Wittgenstein's famous sentence, with which he ends his first treatise in philosophy, the Tractatus Logico-Philosophicus: "Whereof one cannot speak, thereof one must remain silent" [7]. The problem with the Ethics of AI, put succinctly, is the demand for morally-based changes to an empirical scientific field -- the field of AI or Computer Science. These changes have been easily justified in AI because of its engineering counterpart -- one of the fastest-growing and most productive technological fields at the moment, whose range of possible reforms threatens every social dimension. Most of these changes, for better and for worse, have been demanded by the political class, and for the most part only in the West. The aim of this article is not to take any side in the political discussion, although this might be impossible by definition -- after all, everything is political. It is still important to attempt to disentangle the views expressed herein from those barked in the political sphere. The very root of the problem is linked to the over-politicization, indeed perhaps even radicalization, of systems that are not political by nature, like Science. The problem -- that a scientific field has been mixed up with its applications in industry -- is a prominent one.


Pro-business AI regulations need to be global

#artificialintelligence

There is little doubt that artificial intelligence and machine learning will revolutionise decision-making. But how these new technologies make decisions is a mystery: the work that goes on behind the scenes to deliver those decisions rests on mathematical models that cannot easily be explained. AI relies on accurate data, yet data protection regulations can act as a barrier, preventing the access required to train algorithms on more diverse use cases. Without this diversity, datasets are limited to individuals who have opted in to sharing their personal information. Broader data access could improve the accuracy of the models used in machine learning.


Artificial Intelligence Act: will the EU's AI regulation set an example?

#artificialintelligence

When Microsoft unleashed Tay, its AI-powered chatbot, on Twitter on 23 March 2016, the software giant's hope was that it would "engage and entertain people… through casual and playful conversation". An acronym for 'thinking about you', Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn by interacting with human users on the social network. Within hours, things had gone badly wrong. Trolls tweeted politically incorrect phrases at the bot in a bid to manipulate its behaviour. Sure enough, Tay started spewing out racist, sexist and other inflammatory messages to its following of more than 100,000 users. Microsoft was forced to lock the @TayandYou account indefinitely less than a day later, but not before its creation had tweeted more than 96,000 times.


The Coming AI Hackers

#artificialintelligence

Artificial intelligence--AI--is an information technology. And it is already deeply embedded in our social fabric, both in ways we understand and in ways we don't. It will hack our society to a degree and effect unlike anything that's come before. I mean this in two very different ways. One, AI systems will be used to hack us. And two, AI systems will themselves become hackers: finding vulnerabilities in all sorts of social, economic, and political systems, and then exploiting them at an unprecedented speed, scale, and scope. We risk a future of AI systems hacking other AI systems, with humans being little more than collateral damage.

Okay, maybe it's a bit of hyperbole, but none of this requires far-future science-fiction technology. I'm not postulating any "singularity," where the AI-learning feedback loop becomes so fast that it outstrips human understanding. My scenarios don't require evil intent on the part of anyone. We don't need malicious AI systems like Skynet (Terminator) or the Agents (Matrix). Some of the hacks I will discuss don't even require major research breakthroughs. They'll improve as AI techniques get more sophisticated, but we can see hints of them in operation today. This hacking will come naturally, as AIs become more advanced at learning, understanding, and problem-solving.

In this essay, I will talk about the implications of AI hackers. First, I will generalize "hacking" to include economic, social, and political systems--and also our brains. Next, I will describe how AI systems will be used to hack us. Then, I will explain how AIs will hack the economic, social, and political systems that comprise society. Finally, I will discuss the implications of a world of AI hackers, and point towards possible defenses. It's not all as bleak as it might sound.

Caper movies are filled with hacks. Hacks are clever, but not the same as innovations. Systems tend to be optimized for specific outcomes, and hacking is the pursuit of another outcome, often at the expense of the original optimization. Systems also tend to be rigid: they limit what we can do, and invariably some of us want to do something else. Not everyone tries to subvert a system, but enough of us do. Hacking is normally thought of as something you can do to computers, but hacks can be perpetrated on any system of rules--including the tax code. The tax code isn't computer software, but you can still think of it as "code" in the computer sense of the term. It's a series of algorithms that takes an input--financial information for the year--and produces an output: the amount of tax owed. It's deterministic, or at least it's supposed to be.
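
To make the "tax code as code" point concrete, here is a toy sketch of a deterministic tax computation. The brackets and rates are invented for illustration and do not correspond to any real tax schedule.

```python
# Toy sketch: the tax code as a deterministic algorithm from input to output.
# Brackets and rates are made up for illustration only.
def tax_owed(income: float) -> float:
    """Progressive tax: each slice of income is taxed at its bracket's rate."""
    brackets = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.30)]
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        owed += (min(income, upper) - lower) * rate  # tax the slice in this bracket
        lower = upper
    return owed

print(tax_owed(55_000))  # same input always yields the same output: 11500.0
```

A hack, in the essay's sense, is an input crafted so that rules like these produce an outcome their designers never intended.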


The Explainable AI Imperative Amid Global AI Regulation

#artificialintelligence

The General Data Protection Regulation (GDPR) was a big first step toward giving consumers control of their data. As powerful as this privacy initiative is, a new personal-data challenge has emerged. Privacy concerns now focus on what companies do with data once they have it. This is due to the rise of artificial intelligence (AI): neural networks accelerate the exploitation of personal data and raise new questions about the need for further regulation and the safeguarding of privacy rights. Core to the concern about data privacy are the algorithms used to develop AI models.


The Existential Threat of AI-Enhanced Disinformation Operations

#artificialintelligence

A recent Washington Post article about artificial intelligence (AI) briefly caught the public's attention. A former engineer working for Google's Responsible AI organization went public with his belief that the company's chatbot was sentient. It should be stated bluntly: this AI is not a conscious entity. It is a large language model trained indiscriminately on Internet text that uses statistical patterns to predict the most probable sequence of words. While the tone of the Washington Post piece conjured all the usual Hollywood tropes related to humanity's fear of sentient technology (e.g., storylines from Ex Machina, Terminator, or 2001: A Space Odyssey), it also inadvertently highlighted an uncomfortable truth: as AI capabilities continue to improve, they will become increasingly effective tools for manipulating and fooling humans.
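
To illustrate the "statistical patterns" point, here is a toy bigram predictor that picks the most frequent next word from raw counts. Real large language models use neural networks trained on vast corpora; this sketch, with its made-up two-sentence corpus, only shows the underlying principle.

```python
# Toy next-word predictor: pick the statistically most common successor.
from collections import Counter, defaultdict

corpus = ("the model predicts the next word . "
          "the model learns patterns from text .").split()

following = defaultdict(Counter)          # word -> counts of words that follow it
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word: str) -> str:
    """Return the most frequent successor of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("the"))  # -> 'model' (follows 'the' twice in the corpus)
```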