Conscious AI
"We will never build a sex robot," says Mustafa Suleyman
Mustafa Suleyman, CEO of Microsoft AI, is trying to walk a fine line. On the one hand, he thinks that the industry is taking AI in a dangerous direction by building chatbots that present as human: He worries that people will be tricked into seeing life instead of lifelike behavior. In August, he published a much-discussed post on his personal blog that urged his peers to stop trying to make what he called "seemingly conscious artificial intelligence," or SCAI. On the other hand, Suleyman runs a product shop that must compete with those peers. Last week, Microsoft announced a string of updates to its Copilot chatbot, designed to boost its appeal in a crowded market in which customers can pick and choose among a pantheon of rival bots that already includes ChatGPT, Perplexity, Gemini, Claude, DeepSeek, and more.
AI Consciousness and Public Perceptions: Four Futures
Fernandez, Ines, Kyosovska, Nicoleta, Luong, Jay, Mukobi, Gabriel
The discourse on risks from advanced AI systems ("AIs") typically focuses on misuse, accidents and loss of control, but the question of AIs' moral status could have negative impacts which are of comparable significance and could be realised within similar timeframes. Our paper evaluates these impacts by investigating (1) the factual question of whether future advanced AI systems will be conscious, together with (2) the epistemic question of whether future human society will broadly believe advanced AI systems to be conscious. Assuming binary responses to (1) and (2) gives rise to four possibilities: in the true positive scenario, society predominantly correctly believes that AIs are conscious; in the false positive scenario, that belief is incorrect; in the true negative scenario, society correctly believes that AIs are not conscious; and lastly, in the false negative scenario, society incorrectly believes that AIs are not conscious. The paper offers vivid vignettes of the different futures to ground the two-dimensional framework. Critically, we identify four major risks: AI suffering, human disempowerment, geopolitical instability, and human depravity. We evaluate each risk across the different scenarios and provide an overall qualitative risk assessment for each scenario. Our analysis suggests that the worst possibility is the wrong belief that AI is non-conscious, followed by the wrong belief that AI is conscious. The paper concludes with the main recommendations to avoid research aimed at intentionally creating conscious AI and instead focus efforts on reducing our current uncertainties on both the factual and epistemic questions on AI consciousness.
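To make the paper's two-dimensional framework concrete, the short sketch below (an illustrative aid, not code from the paper) enumerates the four scenarios as a lookup table keyed on the factual question (are AIs actually conscious?) and the epistemic question (does society believe they are?). The scenario names follow the abstract above; everything else is an assumption made here for illustration.

```python
# Illustrative sketch of the paper's 2x2 framework: (actually conscious?,
# believed conscious?) -> named scenario. Not taken from the paper's code.
from itertools import product

SCENARIOS = {
    (True,  True):  "true positive  - AIs are conscious and society believes they are",
    (False, True):  "false positive - AIs are not conscious but society believes they are",
    (False, False): "true negative  - AIs are not conscious and society believes they are not",
    (True,  False): "false negative - AIs are conscious but society believes they are not",
}

for conscious, believed in product([True, False], repeat=2):
    print(f"conscious={conscious!s:5}  believed={believed!s:5} -> {SCENARIOS[(conscious, believed)]}")
```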
Minds of machines: The great AI consciousness conundrum
Chalmers was an eminently sensible choice to speak about AI consciousness. He'd earned his PhD in philosophy at an Indiana University AI lab, where he and his computer scientist colleagues spent their breaks debating whether machines might one day have minds. In his 1996 book, The Conscious Mind, he spent an entire chapter arguing that artificial consciousness was possible. If he had been able to interact with systems like LaMDA and ChatGPT back in the '90s, before anyone knew how such a thing might work, he would have thought there was a good chance they were conscious, Chalmers says. But when he stood before a crowd of NeurIPS attendees in a cavernous New Orleans convention hall, clad in his trademark leather jacket, he offered a different assessment.
Could a Large Language Model be Conscious?
There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future.
Has AI started thinking like humans? Let's find out the answer
Artificial intelligence (AI) is a fast-developing field that aims to build computers and systems capable of carrying out tasks that ordinarily require human intellect, such as speech recognition, natural language processing, computer vision, decision-making, and more. AI has already accomplished great feats in a number of fields, including defeating human chess and Go champions, producing lifelike images and texts, detecting illnesses, and operating automobiles. One of the most interesting and contentious unsolved questions is whether AI will ever develop human-like consciousness, the subjective experience of being aware of oneself and the outside environment. Consciousness is a complex and elusive phenomenon that has puzzled philosophers, scientists, and ordinary people for centuries. There is no agreed-upon definition or measure of consciousness, nor a clear understanding of how it arises from the physical processes of the brain.
Can Artificial Intelligence Have Consciousness? - Dataconomy
Can artificial intelligence have consciousness? It's a question that has fascinated researchers and science fiction enthusiasts alike for decades. As AI technologies continue to advance, the possibility of creating conscious machines raises significant questions about the nature of consciousness, the future of humanity, and our relationship with technology. While some argue that AI can be capable of subjective experience and consciousness, others believe that machines are fundamentally incapable of having these experiences. So, can artificial intelligence truly have consciousness?
On the ethics of constructing conscious AI
In its pragmatic turn, the new discipline of AI ethics came to be dominated by humanity's collective fear of its creatures, as reflected in an extensive and perennially popular literary tradition. Dr. Frankenstein's monster in the novel by Mary Shelley rising against its creator; the unorthodox golem in H. Leivick's 1920 play going on a rampage; the rebellious robots of Karel Čapek -- these and hundreds of other examples of the genre are the background against which the preoccupation of AI ethics with preventing robots from behaving badly towards people is best understood. In each of these three fictional cases (as well as in many others), the miserable artificial creature -- mercilessly exploited, or cornered by a murderous mob, and driven to violence in self-defense -- has its author's sympathy. In real life, with very few exceptions, things are different: theorists working on the ethics of AI completely ignore the possibility of robots needing protection from their creators. The present book chapter takes up this, less commonly considered, ethical angle of AI.
A New Brain Model Could Pave the Way for Conscious AI
Mila and IVADO researchers present a new neurocomputational model of the human brain that might bridge the gap in understanding AI and the biological mechanisms underlying mental disorders. The study, which could shed light on how the brain develops complex cognitive skills and advance neural artificial intelligence research, was conducted by an international team of scientists from the Institut Pasteur and Sorbonne University in Paris, the CHU Sainte-Justine, Mila – Quebec Artificial Intelligence Institute, and the University of Montreal. The model emphasizes the interaction between two fundamental types of learning: Hebbian learning, which is driven by statistical regularity (i.e., repetition) and is often summarized, after neuropsychologist Donald Hebb, as "neurons that fire together, wire together," and reinforcement learning, which is associated with reward and the dopamine neurotransmitter. This interaction provides insights into the fundamental mechanisms underlying cognition. The model solves three tasks of increasing complexity, from visual recognition to cognitive manipulation of conscious percepts.
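The interplay described above can be made concrete with a toy example. The sketch below is not the Mila/IVADO model; it only illustrates, under simplified assumptions (linear units, a scalar reward, hypothetical learning rates), how a purely correlational Hebbian update differs from a reward-modulated, reinforcement-style update applied to the same weight matrix.

```python
# Minimal sketch contrasting the two learning rules mentioned above.
# Assumptions (not from the study): linear activations, toy dimensions,
# arbitrary learning rates and reward values.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_out, n_in))  # synaptic weight matrix

def hebbian_update(W, pre, post, lr=0.01):
    """'Neurons that fire together, wire together': strengthen weights
    in proportion to pre/post co-activity, with no reward signal."""
    return W + lr * np.outer(post, pre)

def reward_modulated_update(W, pre, post, reward, baseline=0.0, lr=0.01):
    """Reinforcement-style rule: the same correlational term, but gated
    by a (dopamine-like) reward signal relative to a baseline."""
    return W + lr * (reward - baseline) * np.outer(post, pre)

# One toy step: random input, linear response, arbitrary scalar reward.
x = rng.random(n_in)
y = W @ x
W = hebbian_update(W, x, y)                        # statistics-driven
W = reward_modulated_update(W, x, y, reward=1.0)   # reward-driven
```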
Why Researchers Can't Agree on AI Consciousness
The idea of conscious artificial intelligence (AI) conjures images of machines taking over the world, but experts disagree over whether to take the concept seriously. A top AI researcher recently claimed that AI is already smarter than we think. Ilya Sutskever, the chief scientist of the OpenAI research group, tweeted that "it may be that today's large neural networks are slightly conscious." But other AI experts say that it's far too soon to determine anything of the sort. "To be conscious, an entity needs to be aware of its existence in its environment and that actions it takes will impact its future," Charles Simon, the CEO of FutureAI, told Lifewire in an email interview.