AI Consciousness
AI Consciousness and Existential Risk
In AI, existential risk denotes the hypothetical threat posed by an artificial system that would possess both the capability and the objective, whether direct or indirect, to eradicate humanity. The issue has gained prominence in scientific debate owing to recent technical advances and increased media coverage. In parallel, AI progress has sparked speculation and research about the potential emergence of artificial consciousness. The two questions, AI consciousness and existential risk, are sometimes conflated, as if the former entailed the latter. Here, I explain that this view stems from a common confusion between consciousness and intelligence, two properties that are empirically and theoretically distinct. Arguably, while intelligence is a direct predictor of an AI system's existential threat, consciousness is not. There are, however, incidental scenarios in which consciousness could influence existential risk, in either direction: consciousness could be a means towards AI alignment, thereby lowering existential risk; or it could be a precondition for reaching certain capabilities or levels of intelligence, and thus positively related to existential risk. Recognizing these distinctions can help AI safety researchers and public policymakers focus on the most pressing issues.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Germany (0.04)
- Europe > France > Occitanie > Haute-Garonne > Toulouse (0.04)
- Research Report (0.64)
- Overview (0.46)
- Government (0.66)
- Health & Medicine > Therapeutic Area > Neurology (0.47)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.46)
Identifying Features that Shape Perceived Consciousness in Large Language Model-based AI: A Quantitative Study of Human Responses
Kang, Bongsu, Kim, Jundong, Yun, Tae-Rim, Bae, Hyojin, Kim, Chang-Eop
This study quantitatively examines which features of AI-generated text lead humans to perceive subjective consciousness in large language model (LLM)-based AI systems. Drawing on 99 passages from conversations with Claude 3 Opus and focusing on eight features -- metacognitive self-reflection, logical reasoning, empathy, emotionality, knowledge, fluency, unexpectedness, and subjective expressiveness -- we conducted a survey with 123 participants. Using regression and clustering analyses, we investigated how these features influence participants' perceptions of AI consciousness. The results reveal that metacognitive self-reflection and the AI's expression of its own emotions significantly increased perceived consciousness, while a heavy emphasis on knowledge reduced it. Participants clustered into seven subgroups, each showing distinct feature-weighting patterns. Additionally, higher prior knowledge of LLMs and more frequent usage of LLM-based chatbots were associated with greater overall likelihood assessments of AI consciousness. This study underscores the multidimensional and individualized nature of perceived AI consciousness and provides a foundation for better understanding the psychosocial implications of human-AI interaction. (A minimal sketch of the regression-and-clustering pipeline follows this entry's tags.)
- Asia > South Korea > Seoul > Seoul (0.04)
- North America > United States (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area (0.46)
- Education > Educational Setting (0.46)
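A minimal sketch of the analysis pipeline described above: regress perceived-consciousness ratings on the eight features, then cluster participants by their fitted feature weights. Everything here is an assumption for illustration -- the synthetic data, the choice of ordinary least squares, and k-means with k=7 -- not the study's actual dataset or estimators.

```python
# Rough sketch of the regression-and-clustering analysis described above.
# All data here are synthetic placeholders; the study's real inputs are
# survey responses, and its preprocessing and models may differ.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

FEATURES = ["metacognition", "reasoning", "empathy", "emotionality",
            "knowledge", "fluency", "unexpectedness", "subjectivity"]

n_participants, n_passages = 123, 99
# Hypothetical per-passage feature ratings on a 1-7 scale.
X = rng.uniform(1, 7, size=(n_passages, len(FEATURES)))
weights = np.zeros((n_participants, len(FEATURES)))

for p in range(n_participants):
    # Each participant rates how "conscious" each passage seems
    # (simulated here as a noisy linear function of the features).
    y = X @ rng.normal(0, 1, len(FEATURES)) + rng.normal(0, 1, n_passages)
    model = LinearRegression().fit(X, y)
    weights[p] = model.coef_  # participant-specific feature weights

# Cluster participants by their feature-weighting patterns; the paper
# reports seven subgroups, so k=7 here.
clusters = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(weights)
print(dict(zip(*np.unique(clusters, return_counts=True))))
```

Per-participant regressions followed by clustering of the coefficients is one plausible reading of "regression and clustering analyses"; the authors may have used a pooled or mixed-effects design instead.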
AI is developing fast, but regulators must be faster | Letters
The recent open letter regarding AI consciousness on which you report (AI systems could be 'caused to suffer' if consciousness achieved, says research, 3 February) highlights a genuine moral problem: if we create conscious AI (whether deliberately or inadvertently) then we would have a duty not to cause it to suffer. What the letter fails to do, however, is to capture what a big "if" this is. Some promising theories of consciousness do indeed open the door to AI consciousness. But other equally promising theories suggest that being conscious requires being an organism. Although we can look for indicators of consciousness in AI, it is very difficult – perhaps impossible – to know whether an AI is actually conscious or merely presenting the outward signs of consciousness.
- North America > United States > Virginia (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
- Europe > Sweden > Västerbotten County > Umeå (0.05)
- Asia > China (0.05)
- Government (0.50)
- Law (0.31)
- Information Technology (0.30)
Analyzing Advanced AI Systems Against Definitions of Life and Consciousness
Alavi, Azadeh, Akhoundi, Hossein, Kouchmeshki, Fatemeh
Could artificial intelligence ever become truly conscious in a functional sense? This paper explores that open-ended question through the lens of Life, a concept unifying classical biological criteria (Oxford, NASA, Koshland) with empirical hallmarks such as adaptive self-maintenance, emergent complexity, and rudimentary self-referential modeling. We propose a number of metrics for examining whether an advanced AI system has gained consciousness, while emphasizing that we do not claim all AI systems can become conscious. Rather, we suggest that sufficiently advanced architectures exhibiting immune-like sabotage defenses, mirror self-recognition analogs, or meta-cognitive updates may cross key thresholds akin to life-like or consciousness-like traits. To demonstrate these ideas, we start by assessing adaptive self-maintenance capability, introducing controlled data-corruption sabotage into the training process. The results demonstrate the AI's capability to detect these inconsistencies and revert or self-correct, analogous to regenerative biological processes. We also adapt an animal-inspired mirror self-recognition test to neural embeddings, finding that partially trained CNNs can distinguish self from foreign features with complete accuracy. We then extend our analysis by performing a question-based mirror test on five state-of-the-art chatbots (ChatGPT4, Gemini, Perplexity, Claude, and Copilot), demonstrating their ability to recognize their own answers compared to those of the other chatbots. (A toy sketch of the embedding-based mirror test follows this entry's tags.)
- North America > United States (0.35)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Oceania > Australia (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.92)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Education (1.00)
- Law (0.94)
- (3 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
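A toy rendering of the embedding-based mirror self-recognition test described above: can a model's own feature vectors be told apart from a foreign model's? Random linear encoders stand in for the partially trained CNNs, and the logistic-regression probe is my own choice; nothing here reproduces the authors' models, data, or protocol.

```python
# Toy sketch of a mirror self-recognition test on embeddings, loosely
# following the idea described above. Two random linear "encoders"
# stand in for the paper's partially trained CNNs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
inputs = rng.normal(size=(500, 64))   # shared stimuli shown to both encoders

W_self = rng.normal(size=(64, 32))    # "self" encoder
W_other = rng.normal(size=(64, 32))   # "foreign" encoder

emb_self = np.tanh(inputs @ W_self)
emb_other = np.tanh(inputs @ W_other)

X = np.vstack([emb_self, emb_other])
y = np.array([1] * len(emb_self) + [0] * len(emb_other))  # 1 = self

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# High accuracy here means the "self" embeddings carry a recognizable
# signature -- the property the mirror test is probing for.
print(f"self-vs-foreign accuracy: {probe.score(X_te, y_te):.2f}")
```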
Principles for Responsible AI Consciousness Research
Butlin, Patrick, Lappas, Theodoros
Recent research suggests that it may be possible to build conscious AI systems now or in the near future. Conscious AI systems would arguably deserve moral consideration, and it may be the case that large numbers of conscious systems could be created and caused to suffer. Furthermore, AI systems or AI-generated characters may increasingly give the impression of being conscious, leading to debate about their moral status. Organisations involved in AI research must establish principles and policies to guide research and deployment choices and public communication concerning consciousness. Even if an organisation chooses not to study AI consciousness as such, it will still need policies in place, as those developing advanced AI systems risk inadvertently creating conscious entities. Responsible research and deployment practices are essential to address this possibility. We propose five principles for responsible research and argue that research organisations should make voluntary, public commitments to principles on these lines.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States (0.04)
- Europe > Greece (0.04)
- Africa > Eswatini > Manzini > Manzini (0.04)
- Law (0.46)
- Health & Medicine > Therapeutic Area > Neurology (0.46)
- Government (0.46)
AI Consciousness and Public Perceptions: Four Futures
Fernandez, Ines, Kyosovska, Nicoleta, Luong, Jay, Mukobi, Gabriel
The discourse on risks from advanced AI systems ("AIs") typically focuses on misuse, accidents and loss of control, but the question of AIs' moral status could have negative impacts of comparable significance that could be realised within similar timeframes. Our paper evaluates these impacts by investigating (1) the factual question of whether future advanced AI systems will be conscious, together with (2) the epistemic question of whether future human society will broadly believe advanced AI systems to be conscious. Assuming binary responses to (1) and (2) gives rise to four possibilities: in the true positive scenario, society predominantly correctly believes that AIs are conscious; in the false positive scenario, that belief is incorrect; in the true negative scenario, society correctly believes that AIs are not conscious; and lastly, in the false negative scenario, society incorrectly believes that AIs are not conscious. The paper offers vivid vignettes of the different futures to ground the two-dimensional framework. Critically, we identify four major risks: AI suffering, human disempowerment, geopolitical instability, and human depravity. We evaluate each risk across the different scenarios and provide an overall qualitative risk assessment for each scenario. Our analysis suggests that the worst possibility is the wrong belief that AI is non-conscious, followed by the wrong belief that AI is conscious. The paper concludes with its main recommendations: avoid research aimed at intentionally creating conscious AI, and instead focus efforts on reducing our current uncertainty about both the factual and epistemic questions of AI consciousness. (A trivial encoding of the four-scenario framework follows this entry's tags.)
- Oceania > Australia (0.27)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > New York (0.04)
- (15 more...)
- Research Report (1.00)
- Questionnaire & Opinion Survey (0.92)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Law (0.93)
- (2 more...)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.99)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.67)
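The two-dimensional framework above is mechanical enough to write down: crossing the factual question (are AIs conscious?) with the epistemic question (does society believe they are?) yields the four scenarios. A trivial encoding, with the qualitative risk ordering taken from the abstract and the rest my own labels:

```python
# The four futures from crossing the factual question with the epistemic
# question. The risk ranking follows the abstract: the false negative is
# judged worst, followed by the false positive; the other two are not
# explicitly ranked there, so they are marked "lower" here.
from itertools import product

LABELS = {
    (True, True): "true positive: correctly believed conscious",
    (False, True): "false positive: wrongly believed conscious",
    (False, False): "true negative: correctly believed non-conscious",
    (True, False): "false negative: wrongly believed non-conscious",
}
RISK_RANK = {  # 1 = worst outcome per the paper's assessment
    (True, False): 1,
    (False, True): 2,
}

for conscious, believed in product([True, False], repeat=2):
    rank = RISK_RANK.get((conscious, believed), "lower")
    print(f"{LABELS[(conscious, believed)]:55s} risk rank: {rank}")
```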
AI Consciousness is Inevitable: A Theoretical Computer Science Perspective
We look at consciousness through the lens of Theoretical Computer Science, a branch of mathematics that studies computation under resource limitations. From this perspective, we develop a formal machine model for consciousness. The model is inspired by Alan Turing's simple yet powerful model of computation and Bernard Baars' theater model of consciousness. Though extremely simple, the model aligns at a high level with many of the major scientific theories of human and animal consciousness, supporting our claim that machine consciousness is inevitable. (A toy sketch of the theater model's stage-and-broadcast dynamic follows this entry's tags.)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.09)
- North America > United States > New York > New York County > New York City (0.04)
- (3 more...)
- Research Report (0.50)
- Overview (0.46)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Leisure & Entertainment (0.68)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.68)
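The combination the abstract names -- a Turing-style machine plus Bernard Baars' theater model -- suggests a global-workspace dynamic: many unconscious specialist processors compete for a stage, and the winning content is broadcast to all of them. Below is a drastically simplified toy of that stage-and-broadcast loop; it is my own illustration, not the authors' formal construction, and every name and number in it is arbitrary.

```python
# Toy global-workspace loop in the spirit of Baars' theater model, which
# the paper combines with a Turing-style machine. A drastic simplification
# for illustration only.
import random

random.seed(0)

class Processor:
    """An unconscious specialist that bids to put its chunk on stage."""
    def __init__(self, name):
        self.name = name
        self.heard = []  # broadcasts received so far

    def bid(self):
        # Salience of this processor's current chunk (random stand-in).
        return random.random(), f"{self.name}-chunk"

processors = [Processor(n) for n in ("vision", "audition", "memory", "language")]

for step in range(3):
    # Competition: the highest-salience chunk wins the stage.
    salience, chunk = max(p.bid() for p in processors)
    # Broadcast: every processor receives the winning content.
    for p in processors:
        p.heard.append(chunk)
    print(f"step {step}: broadcast {chunk!r} (salience {salience:.2f})")
```

In the theater metaphor, the repeated cycle of competition, staging, and broadcast plays the role of a conscious moment; the paper's formal model adds resource bounds and a Turing-style machine, which this toy omits.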
Do AI Systems Deserve Rights?
"Do you think people will ever fall in love with machines?" I asked the 12-year-old son of one of my friends. "Yes!" he said, instantly and with conviction. He and his sister had recently visited the Las Vegas Sphere and its newly installed Aura robot--an AI system with an expressive face, advanced linguistic capacities similar to ChatGPT, and the ability to remember visitors' names. "I think of Aura as my friend," added his 15-year-old sister.
The Download: fixing the internet, and detecting AI consciousness
History is rich with examples of people trying to breathe life into inanimate objects, and of people selling hacks and tricks as "magic." But this very human desire to believe in consciousness in machines has never matched up with reality. Creating consciousness in artificial intelligence systems is a dream for many technologists. Large language models are the latest example of our quest for clever machines, and some people (contentiously) claim to have seen glimmers of consciousness in conversations with them. AI systems don't have brains, so it's impossible to use traditional methods of measuring brain activity for signs of life.
- Automobiles & Trucks (0.60)
- Health & Medicine > Therapeutic Area > Neurology (0.41)
- Transportation > Ground > Road (0.38)
Why it'll be hard to tell if AI ever becomes conscious
History is rich with examples of people trying to breathe life into inanimate objects, and of people selling hacks and tricks as "magic." But this very human desire to believe in consciousness in machines has never matched up with reality. Creating consciousness in artificial intelligence systems is the dream of many technologists. Large language models are the latest example of our quest for clever machines, and some people (contentiously) claim to have seen glimmers of consciousness in conversations with them. The point is: machine consciousness is a hotly debated topic.