The Robot and the Philosopher
In the age of A.I., we endlessly debate what consciousness looks like. Can a camera see things more clearly? Earlier that day, she'd been onstage at the conference I was attending and had been teased for a gesture that looked as though she were flipping off the audience. Now she was in the hotel lobby, in a black gown, holding court. She stepped in front of a bright-orange wall. I had brought an 85-mm. lens. "What are your hopes for the future of humanity?" She wasn't keen to answer, but she responded to the camera.
The Principles of Human-like Conscious Machines
Determining whether another system, biological or artificial, possesses phenomenal consciousness has long been a central challenge in consciousness studies. This attribution problem has become especially pressing with the rise of large language models and other advanced AI systems, where debates about "AI consciousness" implicitly rely on some criterion for deciding whether a given system is conscious. In this paper, we propose a substrate-independent, logically rigorous, and counterfeit-resistant sufficiency criterion for phenomenal consciousness. We argue that any machine satisfying this criterion should be regarded as conscious with at least the same level of confidence with which we attribute consciousness to other humans. Building on this criterion, we develop a formal framework and specify a set of operational principles that guide the design of systems capable of meeting the sufficiency condition. We further argue that machines engineered according to this framework can, in principle, realize phenomenal consciousness. As an initial validation, we show that humans themselves can be viewed as machines that satisfy this framework and its principles. If correct, this proposal carries significant implications for philosophy, cognitive science, and artificial intelligence. It offers an explanation for why certain qualia, such as the experience of red, are in principle irreducible to physical description, while simultaneously providing a general reinterpretation of human information processing. Moreover, it suggests a path toward a new paradigm of AI beyond current statistics-based approaches, potentially guiding the construction of genuinely human-like AI.
Which symbol grounding problem should we try to solve?
Müller, Vincent C. (2015), 'Which symbol grounding problem should we try to solve?', Journal of Experimental and Theoretical Artificial Intelligence, 27 (1).
Floridi and Taddeo propose a condition of "zero semantic commitment" for solutions to the grounding problem, and a solution to it. I argue briefly that their condition cannot be fulfilled, not even by their own solution. After a look at Luc Steels's very different competing suggestion, I suggest that we need to rethink what the problem is and what role the 'goals' in a system play in formulating the problem.
Wanting to Be Understood Explains the Meta-Problem of Consciousness
Fernando, Chrisantha, Banarse, Dylan, Osindero, Simon
Because we are highly motivated to be understood, we created public external representations -- mime, language, art -- to externalise our inner states. We argue that such external representations are a pre-condition for access consciousness, the global availability of information for reasoning. Yet the bandwidth of access consciousness is tiny compared with the richness of `raw experience', so no external representation can reproduce that richness in full. Ordinarily an explanation of experience need only let an audience `grasp' the relevant pattern, not relive the phenomenon. But our drive to be understood is so strong, and raw experience so far outstrips our low-level sensorimotor capacities for `grasping' it, that no explanation of the feel of experience can be ``satisfactory''. It is that inflated epistemic demand -- the expectation that we could be perfectly understood by another or by ourselves -- rather than an irreducible metaphysical gulf that keeps the hard problem of consciousness alive. On the plus side, it seems we will never give up creating new ways to communicate and think about our experiences. In this view, to be consciously aware is to strive to have one's agency understood by oneself and others.
Introduction to Artificial Consciousness: History, Current Trends and Ethical Challenges
With the significant progress of artificial intelligence (AI) and consciousness science, artificial consciousness (AC) has recently gained popularity. This work provides a broad overview of the main topics and current trends in AC. The first part traces the history of this interdisciplinary field to establish context and clarify key terminology, including the distinction between Weak and Strong AC. The second part examines major trends in AC implementations, emphasising the synergy between Global Workspace and Attention Schema, as well as the problem of evaluating the internal states of artificial systems. The third part analyses the ethical dimension of AC development, revealing both critical risks and transformative opportunities. The last part offers recommendations to guide AC research responsibly, and outlines the limitations of this study as well as avenues for future research. The main conclusion is that while AC appears both indispensable and inevitable for scientific progress, serious efforts are required to address the far-reaching impact of this innovative research path.
Can a Machine be Conscious? Towards Universal Criteria for Machine Consciousness
Anwar, Nur Aizaan, Badea, Cosmin
As artificially intelligent systems become more anthropomorphic and pervasive, and their potential impact on humanity more urgent, discussions about the possibility of machine consciousness have significantly intensified, and it is sometimes seen as 'the holy grail'. Many concerns have been voiced about the ramifications of creating an artificial conscious entity. This is compounded by a marked lack of consensus around what constitutes consciousness and by an absence of a universal set of criteria for determining consciousness. By going into depth on the foundations and characteristics of consciousness, we propose five criteria for determining whether a machine is conscious, which can also be applied more generally to any entity. This paper aims to serve as a primer and stepping stone for researchers of consciousness, be they in philosophy, computer science, medicine, or any other field, to further pursue this holy grail of philosophy, neuroscience and artificial intelligence.
Towards Verifiable Text Generation with Symbolic References
Hennigen, Lucas Torroba, Shen, Shannon, Nrusimha, Aniruddha, Gapp, Bernhard, Sontag, David, Kim, Yoon
Large language models (LLMs) have demonstrated an impressive ability to synthesize plausible and fluent text. However, they remain vulnerable to hallucinations, and thus their outputs generally require manual human verification for high-stakes applications, which can be time-consuming and difficult. This paper proposes symbolically grounded generation (SymGen) as a simple approach for enabling easier validation of an LLM's output. SymGen prompts an LLM to interleave its regular output text with explicit symbolic references to fields present in some conditioning data (e.g., a table in JSON format). The references can be used to display the provenance of different spans of text in the generation, reducing the effort required for manual verification. Across data-to-text and question answering experiments, we find that LLMs are able to directly output text that makes use of symbolic references while maintaining the quality of the generation. (Figure 1 compares a standard LLM-generated description (A) of a basketball game with a SymGen one (B, ours), based on statistics about the game.)
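The core mechanic the abstract describes — model output interleaved with symbolic references that resolve against the conditioning data — can be sketched in a few lines. This is a hypothetical illustration, not the authors' actual implementation: the `{{dotted.path}}` placeholder syntax, the `render_symgen` function, and the sample data are all assumptions.

```python
import re

def render_symgen(template: str, data: dict):
    """Resolve {{dotted.path}} references against `data`, returning the
    rendered text plus a provenance list of (span_text, path) pairs."""
    provenance = []

    def lookup(path: str):
        # Walk the conditioning data along the dotted path.
        value = data
        for key in path.split("."):
            value = value[key]
        return value

    def substitute(match):
        path = match.group(1)
        text = str(lookup(path))
        # Record where this span of text came from.
        provenance.append((text, path))
        return text

    rendered = re.sub(r"\{\{([\w.]+)\}\}", substitute, template)
    return rendered, provenance

# Invented conditioning data (a "table" in JSON form) and model output:
game = {"home": {"name": "Celtics", "points": 112},
        "away": {"name": "Knicks", "points": 98}}
text, prov = render_symgen(
    "The {{home.name}} beat the {{away.name}} {{home.points}}-{{away.points}}.",
    game)
```

Because every substituted span carries its source path in `prov`, a verification UI can highlight each claim and show exactly which field of the conditioning data it came from, which is the effort-reduction the paper targets.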
Minds of machines: The great AI consciousness conundrum
Chalmers was an eminently sensible choice to speak about AI consciousness. He'd earned his PhD in philosophy at an Indiana University AI lab, where he and his computer scientist colleagues spent their breaks debating whether machines might one day have minds. In his 1996 book, The Conscious Mind, he spent an entire chapter arguing that artificial consciousness was possible. If he had been able to interact with systems like LaMDA and ChatGPT back in the '90s, before anyone knew how such a thing might work, he would have thought there was a good chance they were conscious, Chalmers says. But when he stood before a crowd of NeurIPS attendees in a cavernous New Orleans convention hall, clad in his trademark leather jacket, he offered a different assessment.
A Scientific Feud Breaks Out Into the Open
For years now, Hakwan Lau has suffered from an inner torment. Lau is a neuroscientist who studies the sense of awareness that all of us experience during our every waking moment. How this awareness arises from ordinary matter is an ancient mystery. Several scientific theories purport to explain it, and Lau feels that one of them, called integrated information theory (IIT), has received a disproportionate amount of media attention. He's annoyed that its proponents tout it as the dominant theory in the press.
Probabilistic Formal Modelling to Uncover and Interpret Interaction Styles
Andrei, Oana, Calder, Muffy, Chalmers, Matthew, Morrison, Alistair
We present a study using new computational methods, based on a novel combination of machine learning for inferring admixture hidden Markov models and probabilistic model checking, to uncover interaction styles in a mobile app. These styles are then used to inform a redesign, which is implemented, deployed, and then analysed using the same methods. The data sets are logged user traces, collected over two six-month deployments of each version, involving thousands of users and segmented into different time intervals. The methods do not assume tasks or absolute metrics such as measures of engagement, but uncover the styles through unsupervised inference of clusters and analysis with probabilistic temporal logic. For both versions there was a clear distinction between the styles adopted by users during the first day/week/month of usage, and during the second and third months, a result we had not anticipated.
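The pipeline in this abstract — infer a behaviour model from logged user traces, then interrogate it with probabilistic temporal logic — can be illustrated in miniature. The sketch below is far simpler than the paper's admixture hidden Markov models and model checking: it fits a plain first-order Markov chain from invented traces and answers one bounded-reachability query (the probability of reaching a target state within k steps), the simplest kind of PCTL-style property. All state names and traces are assumptions for illustration.

```python
from collections import defaultdict

def fit_markov_chain(traces):
    """Estimate transition probabilities from a list of state sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def prob_reach(chain, start, target, k):
    """P(reach `target` from `start` within k steps) -- a bounded
    reachability query over the fitted chain."""
    if start == target:
        return 1.0
    if k == 0:
        return 0.0
    return sum(p * prob_reach(chain, nxt, target, k - 1)
               for nxt, p in chain.get(start, {}).items())

# Invented traces of app states logged from hypothetical users:
traces = [["browse", "view", "share"],
          ["browse", "view", "browse", "view"],
          ["browse", "quit"]]
chain = fit_markov_chain(traces)
p = prob_reach(chain, "browse", "share", 2)
```

Clustering traces into "styles" would group users by which fitted chain best explains their behaviour; comparing such queries across time segments is what lets the paper contrast first-day and later-month usage without assuming tasks or engagement metrics.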