Sentient AI


It's time to prepare for AI personhood Jacy Reese Anthis

The Guardian

'Digital minds will be participants in the social contract that forms the bedrock of human society.' Technological advances will bring social upheaval. How will we treat digital minds, and how will they treat us? Last month, when OpenAI released its long-awaited chatbot GPT-5, it briefly removed access to a previous chatbot, GPT-4o. Despite the upgrade, users flocked to social media to express confusion, outrage and depression.


Giving AI a voice: how does AI think it should be treated?

Fay, Maria, Flöther, Frederik F.

arXiv.org Artificial Intelligence

With the astounding progress in (generative) artificial intelligence (AI), there has been significant public discourse regarding regulation and ethics of the technology. Is it sufficient for humans to discuss this only with other humans? Or, given that AI is increasingly becoming a viable source of inspiration for people (not to mention the hypothetical possibility that the technology may at some point become "artificial general intelligence" and/or develop consciousness), should AI not join the discourse? There are new questions and angles that AI brings to the table that we might not have considered before - so let us make the key subject of this book an active participant. This chapter therefore includes a brief human-AI conversation on the topic of AI rights and ethics.


What Do People Think about Sentient AI?

Anthis, Jacy Reese, Pauketat, Janet V. T., Ladak, Ali, Manoli, Aikaterina

arXiv.org Artificial Intelligence

With rapid advances in machine learning, many people in the field have been discussing the rise of digital minds and the possibility of artificial sentience. Future developments in AI capabilities and safety will depend on public opinion and human-AI interaction, yet there has been little systematic data on public attitudes toward these technologies. To begin to fill this research gap, we present the first nationally representative survey data on the topic of sentient AI: initial results from the Artificial Intelligence, Morality, and Sentience (AIMS) survey, a preregistered and longitudinal study of U.S. public opinion that began in 2021. Across one wave of data collection in 2021 and two in 2023 (total N = 3,500), we found mind perception and moral concern for AI well-being in 2021 were higher than predicted and significantly increased in 2023: for example, 71% agree sentient AI deserve to be treated with respect, and 38% support legal rights. People have become more threatened by AI, and there is widespread opposition to new technologies: 63% support a ban on smarter-than-human AI, and 69% support a ban on sentient AI. Expected timelines are surprisingly short and shortening, with a median forecast of sentient AI in only five years and artificial general intelligence in only two years. We argue that, whether or not AIs become sentient, the discussion itself may overhaul human-computer interaction and shape the future trajectory of AI technologies, including existential risks and opportunities.


Worried About Sentient AI? Consider the Octopus

TIME - Tech

As predictable as the swallows returning to Capistrano, recent breakthroughs in AI have been accompanied by a new wave of fears of some version of "the singularity," that point in runaway technological innovation at which computers become unleashed from human control. Those worried that AI is going to toss us humans into the dumpster, however, might look to the natural world for perspective on what current AI can and cannot do. Consider the octopus: those alive today are a marvel of evolution--they can mold themselves into almost any shape and are equipped with an arsenal of weapons and stealth camouflage, as well as an apparent ability to decide which to use depending on the challenge. Yet, despite decades of effort, robotics hasn't come close to duplicating this suite of abilities (not surprising, since the modern octopus is the product of adaptations over 100 million generations). Robotics is far further still from creating HAL.


'Mission: Impossible--Dead Reckoning' Is the Perfect AI Panic Movie

WIRED

American action movie villains have always acted as a sort of paranoia litmus test, capturing a snapshot of the particular anxieties plaguing the country and its citizens at any given time. In the 1990s and '00s, with the Red Menace long forgotten, movies leaned heavily on the awful "bad Arab" trope, pulling their villains from the Middle East. Other recent smash-'em-ups have made bad guys out of rogue spies, shadowy cyber terrorists, and self-interested arms dealers, all common players in the global news landscape. But for Mission: Impossible--Dead Reckoning Part One, out this week, writers Bruce Geller, Erik Jendresen, and Christopher McQuarrie (who also directed the movie) made their big bad--known as The Entity--out of a slightly more amorphous fear: that of an all-powerful, all-seeing, sentient AI. It has access to anything with an online network and can use those evil techno powers to manipulate everything from global military superpowers to a grandma with a gun.


AI Ethics And AI Law Fretting Over Worker Burnout In The Ardent Pursuit Of Responsible AI

#artificialintelligence

If there is one thing that we can almost all entirely agree on, I dare say it might be the abundance of worker burnout. Nary a day goes by without blazing headlines about this or that worker-related burnout happening here or there. Some attribute burnout to concerns over keeping their job and making a living. Others suggest that the burnout mania especially got underway when remote working became acceptable, pushing workers to potentially work nonstop without the conventional leave-the-office-at-6-o'clock cutoff for curtailing work for the day. A slew of reasons exists and is continually bandied around for worker burnout. Those who work in the realm of Artificial Intelligence (AI) are right there in the worker burnout zone too. Yes, with all that excitement and hoopla about the present and future prospects of AI, there are humans toiling away to craft and field the AI. Software developers who specialize in making AI applications are dearly sought by companies. Once onboard, the AI programmers are bound to discover that there is a lot of AI work going on. Indeed, the odds are that a veritable fifteen pounds of AI work is needed and yet the AI teams are barely able to produce five pounds given the team size and AI complexities involved.


Meta AI Unveils AI-Infused Diplomatic Charmer Which Stirs AI Ethics And AI Law Into Indelicate Tiff

#artificialintelligence

Meta AI has released a fascinating AI-infused app that plays the famous board game Diplomacy, doing so at a level seemingly on par with human players. We take a close look at and assess the AI, along with considering crucial AI Ethics and AI Law facets.


AI Ethics And AI Law Just Might Be Prodded And Goaded Into Mandating Safety Warnings On All Existing And Future AI

#artificialintelligence

Latest buzz is that AI ought to have a warning or safety sign to let humankind know they are dealing with AI. Your daily activities are undoubtedly bombarded with a thousand or more precautionary warnings of one kind or another. Most of those are handy and altogether thoughtful signs or labels that serve to keep us hopefully safe and secure. Please be aware that I snuck a few "outliers" on the list to make some noteworthy points. For example, some people believe it is nutty that baby strollers have an affixed label that warns you not to fold the stroller while the baby is still seated within the contraption. Though the sign is certainly appropriate and dutifully useful, it would seem that basic common sense would already be sufficient. What person would not of their own mindful volition realize that they first need to remove the baby? Well, others emphasize that such labels do serve an important purpose. First, someone might truly be oblivious that they need to remove the baby before folding up the stroller.


AI Ethics And AI Law Clarifying What In Fact Is Trustworthy AI

#artificialintelligence

Will we be able to achieve trustworthy AI, and if so, how? Trust is everything, so they say. The noted philosopher Lao Tzu said that those who do not trust enough will not be trusted. Ernest Hemingway, an esteemed novelist, stated that the best way to find out if you can trust somebody is by trusting them. Meanwhile, it seems that trust is both precious and brittle. The trust that one has can collapse like a house of cards or suddenly burst like a popped balloon. The ancient Greek tragedian Sophocles asserted that trust dies but mistrust blossoms. French philosopher and mathematician Descartes contended that it is prudent never to trust wholly those who have deceived us even once. Billionaire business investor extraordinaire Warren Buffett exhorted that it takes twenty years to build a trustworthy reputation and five minutes to ruin it. You might be surprised to know that all of these varied views and provocative opinions about trust are crucial to the advent of Artificial Intelligence (AI). Yes, there is something keenly referred to as trustworthy AI that keeps getting a heck of a lot of attention these days, including handwringing catcalls from within the field of AI and also boisterous outbursts by those outside of the AI realm. The overall notion entails whether or not society is going to be willing to place trust in the likes of AI systems. Presumably, if society won't or can't trust AI, the odds are that AI systems will fail to get traction.


AI Sentience: How Could We Evaluate it?

#artificialintelligence

Approximately two weeks ago, Google engineer Blake Lemoine made a claim that reverberated throughout the global AI community: Google's chatbot, LaMDA, had achieved a degree of sentience akin to that of a human child. Google responded by promptly suspending the engineer, leading many members of the public to speculate as to whether the claim was true. Unfortunately, to call any entity sentient requires an operationalized definition of the term that is universally applicable. Moreover, we would also need a discrete, empirically motivated theoretical framework that adequately addresses the "Hard Problem" of consciousness (i.e., the question of how a set of fundamental attributes gives rise to our capacity for lived experience), which philosophers, psychologists, and neuroscientists have yet to answer. On the other hand, throughout the history of AI, the Turing Test has been popularized as the method of choice for ascribing sentience to computational agents.