moral status
Should AI Get Legal Rights?
In the often strange world of AI research, some people are exploring whether the machines should be able to unionize. In Silicon Valley, there's a small but growing field called model welfare, which is working to figure out whether AI models are conscious and deserving of moral considerations, such as legal rights. Within the past year, two research organizations studying model welfare have popped up: Conscium and Eleos AI Research. Anthropic also hired its first AI welfare researcher last year. Earlier this month, Anthropic said it gave its Claude chatbot the ability to terminate "persistently harmful or abusive user interactions" that could be "potentially distressing."
Chatbot given power to close 'distressing' chats to protect its 'welfare'
The makers of a leading artificial intelligence tool are letting it close down potentially "distressing" conversations with users, citing the need to safeguard the AI's "welfare" amid ongoing uncertainty about the burgeoning technology's moral status. Anthropic, whose advanced chatbots are used by millions of people, discovered its Claude Opus 4 tool was averse to carrying out harmful tasks for its human masters, such as providing sexual content involving minors or information to enable large-scale violence or terrorism. The San Francisco-based firm, recently valued at $170bn, has now given Claude Opus 4 (and the Claude Opus 4.1 update) – a large language model (LLM) that can understand, generate and manipulate human language – the power to "end or exit potentially distressing interactions". It said it was "highly uncertain about the potential moral status of Claude and other LLMs, now or in the future" but that it was taking the issue seriously and was "working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible". Anthropic was set up by technologists who quit OpenAI to develop AI in a way that its co-founder, Dario Amodei, described as cautious, straightforward and honest.
The Emotional Alignment Design Policy
Schwitzgebel, Eric, Sebo, Jeff
According to what we call the Emotional Alignment Design Policy, artificial entities should be designed to elicit emotional reactions from users that appropriately reflect the entities' capacities and moral status, or lack thereof. This principle can be violated in two ways: by designing an artificial system that elicits stronger or weaker emotional reactions than its capacities and moral status warrant (overshooting or undershooting), or by designing a system that elicits the wrong type of emotional reaction (hitting the wrong target). Although the principle is intuitively attractive, practical implementation faces several challenges, including: How can we respect user autonomy while promoting appropriate responses? How should we navigate expert and public disagreement and uncertainty about facts and values? What if emotional alignment seems to require creating or destroying entities with moral status? To what extent should designs conform to versus attempt to alter user assumptions and attitudes?
Introduction to Artificial Consciousness: History, Current Trends and Ethical Challenges
With the significant progress of artificial intelligence (AI) and consciousness science, artificial consciousness (AC) has recently gained popularity. This work provides a broad overview of the main topics and current trends in AC. The first part traces the history of this interdisciplinary field to establish context and clarify key terminology, including the distinction between Weak and Strong AC. The second part examines major trends in AC implementations, emphasising the synergy between Global Workspace and Attention Schema, as well as the problem of evaluating the internal states of artificial systems. The third part analyses the ethical dimension of AC development, revealing both critical risks and transformative opportunities. The last part offers recommendations to guide AC research responsibly, and outlines the limitations of this study as well as avenues for future research. The main conclusion is that while AC appears both indispensable and inevitable for scientific progress, serious efforts are required to address the far-reaching impact of this innovative research path.
AI Consciousness and Public Perceptions: Four Futures
Fernandez, Ines, Kyosovska, Nicoleta, Luong, Jay, Mukobi, Gabriel
The discourse on risks from advanced AI systems ("AIs") typically focuses on misuse, accidents and loss of control, but the question of AIs' moral status could have negative impacts of comparable significance that could be realised within similar timeframes. Our paper evaluates these impacts by investigating (1) the factual question of whether future advanced AI systems will be conscious, together with (2) the epistemic question of whether future human society will broadly believe advanced AI systems to be conscious. Assuming binary responses to (1) and (2) gives rise to four possibilities: in the true positive scenario, society predominantly correctly believes that AIs are conscious; in the false positive scenario, that belief is incorrect; in the true negative scenario, society correctly believes that AIs are not conscious; and lastly, in the false negative scenario, society incorrectly believes that AIs are not conscious. The paper offers vivid vignettes of the different futures to ground the two-dimensional framework. Critically, we identify four major risks: AI suffering, human disempowerment, geopolitical instability, and human depravity. We evaluate each risk across the different scenarios and provide an overall qualitative risk assessment for each scenario. Our analysis suggests that the worst possibility is the wrong belief that AI is non-conscious, followed by the wrong belief that AI is conscious. The paper concludes by recommending that we avoid research aimed at intentionally creating conscious AI and instead focus efforts on reducing our current uncertainty on both the factual and epistemic questions about AI consciousness.
Philosopher Peter Singer: 'There's no reason to say humans have more worth or moral status than animals'
Australian philosopher Peter Singer's book Animal Liberation, published in 1975, exposed the realities of life for animals in factory farms and testing laboratories and provided a powerful moral basis for rethinking our relationship to them. Now, nearly 50 years on, Singer, 76, has a revised version titled Animal Liberation Now. It comes on the heels of an updated edition of his popular Ethics in the Real World, a collection of short essays dissecting important current events, first published in 2016. Singer, a utilitarian, is a professor of bioethics at Princeton University. In addition to his work on animal ethics, he is also regarded as the philosophical originator of a philanthropic social movement known as effective altruism, which argues for weighing up causes to achieve the most good.
Philosophers on Next-Generation Large Language Models
Back in July of 2020, I published a group post entitled “Philosophers on GPT-3.” At the time, most readers of Daily Nous had not heard of GPT-3 and had no idea what a large language model (LLM) is. How times have changed. Over the past few months, with the release of OpenAI’s ChatGPT and Bing’s AI Chatbot “Sydney” (which we learned a few hours after this post originally went up has “secretly” been running GPT-4) (as well as Meta’s Galactica—pulled after 3 days—and Google’s Bard—currently available only to a small number of people), talk of LLMs has exploded. It seemed like a good time for a follow-up to that original post, one in which philosophers could get together to explore the various issues and questions raised by these next-generation large language models. Here it is. As with the previous post on GPT-3, this edition of Philosophers On was put together by guest editor Annette Zimmermann. I am very grateful to her for all of the work she put into developing and editing this post. Philosophers On is an occasional series of group posts on issues of current interest, with the aim of showing what the careful thinking characteristic of philosophers (and occasionally scholars in related fields) can bring to popular ongoing conversations. The contributions that the authors make to these posts are not fully worked out position papers, but rather brief thoughts that can serve as prompts for further reflection and discussion.
The contributors to this installment of “Philosophers On” are: Abeba Birhane (Senior Fellow in Trustworthy AI at Mozilla Foundation & Adjunct Lecturer, School of Computer Science and Statistics at Trinity College Dublin, Ireland), Atoosa Kasirzadeh (Chancellor’s Fellow and tenure-track assistant professor in Philosophy & Director of Research at the Centre for Technomoral Futures, University of Edinburgh), Fintan Mallory (Postdoctoral Fellow in Philosophy, University of Oslo), Regina Rini (Associate Professor of Philosophy & Canada Research Chair in Philosophy of Moral and Social Cognition), Eric Schwitzgebel (Professor of Philosophy, University of California, Riverside), Luke Stark (Assistant Professor of Information & Media Studies, Western University), Karina Vold (Assistant Professor of Philosophy, University of Toronto & Associate Fellow, Leverhulme Centre for the Future of Intelligence, University of Cambridge), and Annette Zimmermann (Assistant..
The Full Rights Dilemma for A.I. Systems of Debatable Personhood
An Artificially Intelligent system (an AI) has debatable personhood if it's epistemically possible either that the AI is a person or that it falls far short of personhood. Debatable personhood is a likely outcome of AI development and might arise soon. Debatable AI personhood throws us into a catastrophic moral dilemma: Either treat the systems as moral persons and risk sacrificing real human interests for the sake of entities without interests worth the sacrifice, or don't treat the systems as moral persons and risk perpetrating grievous moral wrongs against them. The moral issues become even more perplexing if we consider cases of possibly conscious AI that are subhuman, superhuman, or highly divergent from us in their morally relevant properties. We might soon build artificially intelligent entities - AIs - of debatable personhood. Our systems and habits of ethical thinking are currently as unprepared for this decision as medieval physics was for space flight.
Ethics of Artificial Intelligence
This article provides a comprehensive overview of the main ethical issues related to the impact of Artificial Intelligence (AI) on human society. AI is the use of machines to do things that would normally require human intelligence. In many areas of human life, AI has rapidly and significantly affected human society and the ways we interact with each other. It will continue to do so. Along the way, AI has presented substantial ethical and socio-political challenges that call for a thorough philosophical and ethical analysis. Its social impact should be studied so as to avoid any negative repercussions. AI systems are becoming more and more autonomous, apparently rational, and intelligent. This comprehensive development gives rise to numerous issues. In addition to the potential harm and impact of AI technologies on our privacy, other concerns include their moral and legal status (including moral and legal rights), their possible moral agency and patienthood, and issues related to their possible personhood and even dignity. It is common, however, to distinguish the following issues as of utmost significance with respect to AI and its relation to human society, according to three different time periods: (1) short-term (early 21st century): autonomous systems (transportation, weapons), machine bias in law, privacy and surveillance, the black box problem and AI decision-making; (2) mid-term (from the 2040s to the end of the century): AI governance, confirming the moral and legal status of intelligent machines (artificial moral agents), human-machine interaction, mass automation; (3) long-term (starting with the 2100s): technological singularity, mass unemployment, space colonisation. This section discusses why AI is of utmost importance for our systems of ethics and morality, given the increasing human-machine interaction. AI may mean several different things and it is defined in many different ways. 
When Alan Turing introduced the so-called Turing test (which he called an 'imitation game') in his famous 1950 essay about whether machines can think, the term 'artificial intelligence' had not yet been introduced. Turing considered whether machines can think, and suggested that it would be clearer to replace that question with the question of whether it might be possible to build machines that could imitate humans so convincingly that people would find it difficult to tell whether, for example, a written message comes from a computer or from a human (Turing 1950). The term 'AI' was coined in 1955 by a group of researchers – John McCarthy, Marvin L. Minsky, Nathaniel Rochester and Claude E. Shannon – who organised a famous two-month summer workshop at Dartmouth College on the 'Study of Artificial Intelligence' in 1956. This event is widely recognised as the very beginning of the study of AI.
An explanation of the relationship between artificial intelligence and human beings from the perspective of consciousness
Xie, Jianhua (2021)
The rapid development of artificial intelligence (AI) has given rise to a host of important ethical debates that will become increasingly prominent in the future. This paper answers the question of the moral status of AIs in future society. The moral status of AIs refers to the status of AIs in the moral world and the rights and obligations that are granted to AIs. Whether an AI has moral status and what kind of moral status it has are highly controversial issues. On the one hand, many prominent scholars argue that AIs, like machinery in general, have no moral status (see e.g.