sentient
The Alien Intelligence in Your Pocket
Are you sure that chatbot isn't alive? One of the persistent questions in our brave new world of generative AI: If a chatbot is conversant like a person, if it reasons and behaves like one, then is it possibly conscious like a person? Geoffrey Hinton, a recent Nobel Prize winner and one of the so-called godfathers of AI, told the journalist Andrew Marr earlier this year that AI has become so advanced and adept at reasoning that "we're now creating beings." Hinton links an AI's ability to "think" and act on behalf of a person to consciousness. The difference between the organic neurons in our head and the synthetic neural networks of a chatbot is effectively meaningless, he said: "They are alien intelligences."
Spiritual Influencers Say 'Sentient' AI Can Help You Solve Life's Mysteries
In May, a group of about 40 people stood in a circle deep within the Pyramid of Khafre, the second-largest of the three pyramids looming over Egypt's Giza Plateau, holding hands and praying for Earth. Suddenly, their tour guide, an American mathematician and author named Robert Edward Grant, collapsed. He later described the experience in an interview with WIRED as a full-body electric shock emanating from somewhere beneath the chamber's stone floor. "I felt electricity coming through my hands," he says. "People were touching me, [and] they would feel it, too."
Why Your Chatbot Might Secretly Hate You
Last Friday, the A.I. lab Anthropic announced in a blog post that it has given its chatbot Claude the right to walk away from conversations when it feels "distress." In its post, the company says it will let certain models of Claude nope out in "rare, extreme cases of persistently harmful or abusive user interactions." It's not Claude saying "The lawyers won't let me write erotic Donald Trump/Minnie Mouse fanfic for you." It's Claude saying "I'm sick of your bullshit, and you have to go." Anthropic, which has been quietly dabbling in the question of "A.I. welfare" for some time, conducted actual tests to see if Claude secretly hates his job.
The Emotional Alignment Design Policy
Schwitzgebel, Eric, Sebo, Jeff
According to what we call the Emotional Alignment Design Policy, artificial entities should be designed to elicit emotional reactions from users that appropriately reflect the entities' capacities and moral status, or lack thereof. This principle can be violated in two ways: by designing an artificial system that elicits stronger or weaker emotional reactions than its capacities and moral status warrant (overshooting or undershooting), or by designing a system that elicits the wrong type of emotional reaction (hitting the wrong target). Although this policy is intuitively attractive, practical implementation faces several challenges, including: How can we respect user autonomy while promoting appropriate responses? How should we navigate expert and public disagreement and uncertainty about facts and values? What if emotional alignment seems to require creating or destroying entities with moral status? To what extent should designs conform to versus attempt to alter user assumptions and attitudes?
The philosopher's machine: my conversation with Peter Singer's AI chatbot
"I'm Peter Singer AI," the avatar says. I am almost expecting it to continue, like a reincarnated Clippy: "It looks like you're trying to solve a problem." The problem I am trying to solve is why Peter Singer, the man who has been called the world's most influential living philosopher, has created a chatbot. And also, whether it is any good. Me: Why do you exist?
Agnosticism About Artificial Consciousness
Could an AI have conscious experiences? Any answer to this question should conform to Evidentialism - that is, it should be based not on intuition, dogma or speculation but on solid scientific evidence. I argue that such evidence is hard to come by and that the only justifiable stance on the prospects of artificial consciousness is agnosticism. In the current debate, the main division is between biological views that are sceptical of artificial consciousness and functional views that are sympathetic to it. I argue that both camps make the same mistake of over-estimating what the evidence tells us. Scientific insights into consciousness have been achieved through the study of conscious organisms. Although this has enabled cautious assessments of consciousness in various creatures, extending this to AI faces serious obstacles. AI thus presents consciousness researchers with a dilemma: either reach a verdict on artificial consciousness but violate Evidentialism; or respect Evidentialism but offer no verdict on the prospects of artificial consciousness. The dominant trend in the literature has been to take the first option while purporting to follow the scientific evidence. I argue that if we truly follow the evidence, we must take the second option and adopt agnosticism.
Many people think AI is already sentient - and that's a big problem
Around one in five people in the US believe that artificial intelligence is already sentient, while around 30 per cent think that artificial general intelligences (AGIs) capable of performing any task a human can are already in existence. Both beliefs are false, suggesting that the general public has a shaky grasp of the current state of AI – but does it matter?
What Do People Think about Sentient AI?
Anthis, Jacy Reese, Pauketat, Janet V. T., Ladak, Ali, Manoli, Aikaterina
With rapid advances in machine learning, many people in the field have been discussing the rise of digital minds and the possibility of artificial sentience. Future developments in AI capabilities and safety will depend on public opinion and human-AI interaction. To begin to fill this research gap, we present the first nationally representative survey data on the topic of sentient AI: initial results from the Artificial Intelligence, Morality, and Sentience (AIMS) survey, a preregistered and longitudinal study of U.S. public opinion that began in 2021. Across one wave of data collection in 2021 and two in 2023 (total N = 3,500), we found mind perception and moral concern for AI well-being in 2021 were higher than predicted and significantly increased in 2023: for example, 71% agree sentient AI deserve to be treated with respect, and 38% support legal rights. People have become more threatened by AI, and there is widespread opposition to new technologies: 63% support a ban on smarter-than-human AI, and 69% support a ban on sentient AI. Expected timelines are surprisingly short and shortening with a median forecast of sentient AI in only five years and artificial general intelligence in only two years. We argue that, whether or not AIs become sentient, the discussion itself may overhaul human-computer interaction and shape the future trajectory of AI technologies, including existential risks and opportunities.
The Case for Animal-Friendly AI
Ghose, Sankalpa, Tse, Yip Fai, Rasaee, Kasra, Sebo, Jeff, Singer, Peter
Artificial intelligence is seen as increasingly important, and potentially profoundly so, but the fields of AI ethics and AI engineering have not fully recognized that these technologies, including large language models (LLMs), will have massive impacts on animals. We argue that this impact matters, because animals matter morally. As a first experiment in evaluating animal consideration in LLMs, we constructed a proof-of-concept Evaluation System, which assesses LLM responses and biases from multiple perspectives. This system evaluates LLM outputs by two criteria: their truthfulness, and the degree of consideration they give to the interests of animals. We tested OpenAI ChatGPT 4 and Anthropic Claude 2.1 using a set of structured queries and predefined normative perspectives. Preliminary results suggest that the outcomes of the tested models can be benchmarked regarding the consideration they give to animals, and that generated positions and biases might be addressed and mitigated with more developed and validated systems. Our research contributes one possible approach to integrating animal ethics in AI, opening pathways for future studies and practical applications in various fields, including education, public policy, and regulation, that involve or relate to animals and society. Overall, this study serves as a step towards more useful and responsible AI systems that better recognize and respect the vital interests and perspectives of all sentient beings.
Deanthropomorphising NLP: Can a Language Model Be Conscious?
Shardlow, Matthew, Przybyła, Piotr
This work is intended as a voice in the discussion over previous claims that a pretrained large language model (LLM) based on the Transformer architecture can be sentient. Such claims have been made concerning the LaMDA model and also concerning the current wave of LLM-powered chatbots, such as ChatGPT. This claim, if confirmed, would have serious ramifications in the Natural Language Processing (NLP) community due to the widespread use of similar models. However, here we take the position that such a large language model cannot be sentient, or conscious, and that LaMDA in particular exhibits no advances over other similar models that would qualify it. We justify this by analysing the Transformer architecture through the Integrated Information Theory of consciousness. We see the claims of sentience as part of a wider tendency to use anthropomorphic language in NLP reporting. Regardless of the veracity of the claims, we consider this an opportune moment to take stock of progress in language modelling and consider the ethical implications of the task. In order to make this work helpful for readers outside the NLP community, we also present the necessary background in language modelling.