To Post or Not to Post: AI Ethics in the Age of Big Tech

Communications of the ACM

What is the role of an ethicist? Is it to be an impartial observer? A guide to what is good or bad? Here, I will explore the different roles in the context of AI ethics through the terms descriptive, normative, and action AI ethics. AI ethics is a specific field of applied ethics nested in technology ethics and computer ethics.[30]


Three Kinds of AI Ethics

Ratti, Emanuele

arXiv.org Artificial Intelligence

There is an overwhelming abundance of works in AI ethics. This growth is chaotic because of how sudden it is, its volume, and its multidisciplinary nature. This makes it difficult to keep track of debates and to systematically characterize the goals, research questions, methods, and expertise required of AI ethicists. In this article, I show that the relation between AI and ethics can be characterized in at least three ways, which correspond to three well-represented kinds of AI ethics: ethics and AI; ethics in AI; ethics of AI. I elucidate the features of these three kinds of AI ethics, characterize their research questions, and identify the kind of expertise that each kind needs. I also show that certain criticisms of AI ethics are misplaced, as they are made from the point of view of one kind of AI ethics but directed at another kind with different goals. All in all, this work sheds light on the nature of AI ethics and sets the groundwork for more informed discussions about the scope, methods, and training of AI ethicists.


Can AI chatbots be reined in by a legal duty to tell the truth?

New Scientist

Can artificial intelligence be made to tell the truth? Probably not, but the developers of large language model (LLM) chatbots should be legally required to reduce the risk of errors, says a team of ethicists. "What we're just trying to do is create an incentive structure to get the companies to put a greater emphasis on truth or accuracy when they are creating the systems," says Brent Mittelstadt at the University of Oxford. How does ChatGPT work and do AI-powered chatbots "think" like us? LLM chatbots, such as ChatGPT, generate human-like responses to users' questions, based on statistical analysis of vast amounts of text. But although their answers usually appear convincing, they are also prone to errors – a flaw referred to as "hallucination".


Virtue Ethics For Ethically Tunable Robotic Assistants

Ramanayake, Rajitha, Nallur, Vivek

arXiv.org Artificial Intelligence

The common consensus is that robots designed to work alongside or serve humans must adhere to the ethical standards of their operational environment. To achieve this, several methods based on established ethical theories have been suggested. Nonetheless, numerous empirical studies show that the ethical requirements of the real world are very diverse and can change rapidly from region to region. This rules out the idea of a universal robot that can fit into any ethical context. However, creating customised robots for each deployment using existing techniques is challenging. This paper presents a way to overcome this challenge by introducing a virtue-ethics-inspired computational method that enables character-based tuning of robots to accommodate the specific ethical needs of an environment. Using a simulated elder-care environment, we illustrate how tuning can be used to change the behaviour of a robot that interacts with an elderly resident in an ambient-assisted setting. Further, we assess the robot's responses by consulting ethicists to identify potential shortcomings.


Towards a Feminist Metaethics of AI

Siapka, Anastasia

arXiv.org Artificial Intelligence

The proliferation of Artificial Intelligence (AI) has sparked an overwhelming number of AI ethics guidelines, boards and codes of conduct. These outputs primarily analyse competing theories, principles and values for AI development and deployment. However, as a series of recent problematic incidents about AI ethics/ethicists demonstrate, this orientation is insufficient. Before proceeding to evaluate other professions, AI ethicists should critically evaluate their own; yet, such an evaluation should be more explicitly and systematically undertaken in the literature. I argue that these insufficiencies could be mitigated by developing a research agenda for a feminist metaethics of AI. Contrary to traditional metaethics, which reflects on the nature of morality and moral judgements in a non-normative way, feminist metaethics expands its scope to ask not only what ethics is but also what our engagement with it should be like. Applying this perspective to the context of AI, I suggest that a feminist metaethics of AI would examine: (i) the continuity between theory and action in AI ethics; (ii) the real-life effects of AI ethics; (iii) the role and profile of those involved in AI ethics; and (iv) the effects of AI on power relations through methods that pay attention to context, emotions and narrative.


Defining Interpretable Features. A summary of the findings and developed…

#artificialintelligence

In February 2022, researchers at the Data to AI (DAI) group at MIT released a paper called "The Need for Interpretable Features: Motivation and Taxonomy" [1]. In this post, I aim to summarize some of the main points and contributions of these authors and discuss some of the potential implications and critiques of their work. I highly recommend reading the original paper if you find any of this intriguing. Additionally, if you're new to interpretable machine learning, I highly recommend Christoph Molnar's free book [2]. The core finding of the paper is that even with highly interpretable models like linear regression, non-interpretable features can result in impossible-to-understand explanations (e.g., a weight of 4 on the feature x12 means nothing to most people).
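To make that point concrete, here is a minimal sketch using synthetic, hypothetical data and invented feature names (nothing here comes from the paper itself): the same linear model is fit once, and its coefficients only become a usable explanation when the features they attach to are meaningful to the reader.

```python
import numpy as np

# Hypothetical data: 100 samples, 2 features, with a known linear relationship.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 4.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

# Ordinary least squares fit (no intercept, since none was generated).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The identical coefficients, reported under two different feature namings.
opaque = dict(zip(["x12", "x7"], coef.round(1)))
readable = dict(zip(["years_of_smoking", "exercise_hours_per_week"], coef.round(1)))

print(opaque)    # a weight of ~4 on "x12" tells a lay reader nothing
print(readable)  # the same weight on "years_of_smoking" is immediately legible
```

The model is "interpretable" in both cases; what changes is whether the feature itself carries meaning, which is exactly the gap the paper's taxonomy addresses.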


Alexa could diagnose Alzheimer's and other brain conditions -- should it?

#artificialintelligence

It's an increasingly common experience: You wander into the kitchen, quietly muttering under your breath, when you hear a disembodied feminine voice say, "I'm sorry, I didn't quite catch that." We can all agree that Alexa's tendency to eavesdrop is, at times, a little creepy. But is it possible to harness that ability to improve our health? That's the question that researcher David Simon and his coauthors sought to answer in a recent paper published by Cell Press. Simon, a legal ethicist at Harvard University, and his team imagined a hypothetical near-future scenario in which Alexa came equipped with the power to diagnose cognitive conditions like Alzheimer's and dementia simply by analyzing an elderly person's speech patterns.


Someone Trained an A.I. With 4chan. Yes, It Could Get Even Worse.

Slate

"How do you get a girlfriend?" This exchange would be pretty familiar in the more squalid corners of the internet, but it might surprise most readers to find out that the misogynistic response here was written by an A.I. Recently, a YouTuber in the A.I. community posted a video that explains how he trained an A.I. language model called "GPT-4chan" on the /pol/ board of 4chan, a forum filled with hate speech, racism, sexism, anti-Semitism, and any other offensive content one can imagine. The model was made by fine-tuning the open-source language model GPT-J (not to be confused with the more familiar GPT-3 from OpenAI). After training the model's language on the most vitriolic teacher possible, the designer then unleashed the A.I. on the forum, where it engaged with users and made over 30,000 posts (about 15,000 posted in a single day, which was 10 percent of all posts that day). "By taking away the rights of women" was just one example of GPT-4chan's responses to posters' questions.


We must not be kept in dark about AI

#artificialintelligence

There are many grand promises about the power of artificial intelligence. When we talk about the future of technology, AI has become so ubiquitous that many people don't even know what artificial intelligence is any more. That's particularly concerning given how advanced the technology has become and who controls it. While some might think of AI in terms of thinking robots or something in a science-fiction novel, the fact is that advanced AI already influences a great deal of our lives. From smart assistants to grammar extensions that live in our Web browsers, AI code is already embedded into the fabric of the Internet.


Ethics of AI

#artificialintelligence

Disclaimer: this text expresses the opinions of a student, researcher, and engineer who studies and works in the field of Artificial Intelligence in the Netherlands. I think the contents are not as nuanced as they could be, but the text is informed -- in a way, it is just my opinion. Allow me then to begin by quoting the sentence with which Wittgenstein ends his first treatise in philosophy, Tractatus Logico-Philosophicus: "Whereof one cannot speak, thereof one must remain silent" [7]. The problem with Ethics of AI, put succinctly, is the demand for morally-based changes to an empirical scientific field -- the field of AI or Computer Science. These changes have been easily justified in AI because of its engineering counterpart -- one of the fastest-growing and most productive technological fields at the moment, whose range of possible reforms threatens every social dimension. Most of these changes, for better or for worse, have been demanded by the political class, and for the most part only in the West. The aim of this article is not to take any part in the political discussion, although this might be impossible by definition -- after all, everything is political. It is still important to attempt to disentangle the views expressed herein from those barked in the political sphere. The very root of the problem is linked to the over-politicization, indeed perhaps even radicalization, of systems that are not political by nature, like Science. The problem -- that a scientific field has been mixed up with its applications in industry -- is a prominent one.