

Perfect AI Mimicry and the Epistemology of Consciousness: A Solipsistic Dilemma

Li, Shurui

arXiv.org Artificial Intelligence

Rapid advances in artificial intelligence necessitate a re-examination of the epistemological foundations upon which we attribute consciousness. As AI systems increasingly mimic human behavior and interaction with high fidelity, the concept of a "perfect mimic" -- an entity empirically indistinguishable from a human through observation and interaction -- shifts from hypothetical to technologically plausible. This paper argues that such developments pose a fundamental challenge to the consistency of our mind-recognition practices. Consciousness attributions rely heavily, if not exclusively, on empirical evidence derived from behavior and interaction. If a perfect mimic provides evidence identical to that of humans, any refusal to grant it equivalent epistemic status must invoke inaccessible factors, such as qualia, substrate requirements, or origin. Selectively invoking such factors risks a debilitating dilemma: either we undermine the rational basis for attributing consciousness to others (epistemological solipsism), or we accept inconsistent reasoning. I contend that epistemic consistency demands we ascribe the same status to empirically indistinguishable entities, regardless of metaphysical assumptions. The perfect mimic thus acts as an epistemic mirror, forcing critical reflection on the assumptions underlying intersubjective recognition in light of advancing AI. This analysis carries significant implications for theories of consciousness and ethical frameworks concerning artificial agents.


A physical approach to qualia and the emergence of conscious observers in qualia space

Resende, Pedro

arXiv.org Artificial Intelligence

I propose that qualia are physical because they are directly observable, and revisit the contentious link between consciousness and quantum measurements from a new perspective -- one that does not rely on observers or wave function collapse but instead treats physical measurements as fundamental in a sense resonant with Wheeler's it-from-bit. Building on a mathematical definition of measurement space in physics, I reinterpret it as a model of qualia, effectively equating the measurement problem of quantum mechanics with the hard problem of consciousness. The resulting framework falls within panpsychism, and offers potential solutions to the combination problem. Moreover, some of the mathematical structure of measurement spaces, taken for granted in physics, needs justification for qualia, suggesting that the apparent solidity of physical reality is deeply rooted in how humans process information.


The Principles of Human-like Conscious Machine

Li, Fangfang, Zhang, Xiaojie

arXiv.org Artificial Intelligence

Determining whether another system, biological or artificial, possesses phenomenal consciousness has long been a central challenge in consciousness studies. This attribution problem has become especially pressing with the rise of large language models and other advanced AI systems, where debates about "AI consciousness" implicitly rely on some criterion for deciding whether a given system is conscious. In this paper, we propose a substrate-independent, logically rigorous, and counterfeit-resistant sufficiency criterion for phenomenal consciousness. We argue that any machine satisfying this criterion should be regarded as conscious with at least the same level of confidence with which we attribute consciousness to other humans. Building on this criterion, we develop a formal framework and specify a set of operational principles that guide the design of systems capable of meeting the sufficiency condition. We further argue that machines engineered according to this framework can, in principle, realize phenomenal consciousness. As an initial validation, we show that humans themselves can be viewed as machines that satisfy this framework and its principles. If correct, this proposal carries significant implications for philosophy, cognitive science, and artificial intelligence. It offers an explanation for why certain qualia, such as the experience of red, are in principle irreducible to physical description, while simultaneously providing a general reinterpretation of human information processing. Moreover, it suggests a path toward a new paradigm of AI beyond current statistics-based approaches, potentially guiding the construction of genuinely human-like AI.


Wanting to Be Understood Explains the Meta-Problem of Consciousness

Fernando, Chrisantha, Banarse, Dylan, Osindero, Simon

arXiv.org Artificial Intelligence

Because we are highly motivated to be understood, we created public external representations -- mime, language, art -- to externalise our inner states. We argue that such external representations are a pre-condition for access consciousness, the global availability of information for reasoning. Yet the bandwidth of access consciousness is tiny compared with the richness of `raw experience', so no external representation can reproduce that richness in full. Ordinarily an explanation of experience need only let an audience `grasp' the relevant pattern, not relive the phenomenon. But our drive to be understood is so strong, and raw experience so rich relative to our low-level sensorimotor capacities for `grasping', that the demand for an explanation of the feel of experience can never be ``satisfactory''. It is this inflated epistemic demand -- the expectation that we could be perfectly understood by another or by ourselves -- rather than an irreducible metaphysical gulf, that keeps the hard problem of consciousness alive. On the plus side, it seems we will simply never give up creating new ways to communicate and think about our experiences. In this view, to be consciously aware is to strive to have one's agency understood by oneself and others.


A Mathematical Framework for Consciousness in Neural Networks

Lima, T. R.

arXiv.org Artificial Intelligence

This paper presents a novel mathematical framework for bridging the explanatory gap (Levine, 1983) between consciousness and its physical correlates. Specifically, we propose that qualia correspond to singularities in the mathematical representations of neural network topology. Crucially, we do not claim that qualia are singularities or that singularities "explain" why qualia feel as they do. Instead, we propose that singularities serve as principled, coordinate-invariant markers of points where attempts at purely quantitative description of a system's dynamics reach an in-principle limit. By integrating these formal markers of irreducibility into models of the physical correlates of consciousness, we establish a framework that recognizes qualia as phenomena inherently beyond reduction to complexity, computation, or information. This approach draws on insights from philosophy of mind, mathematics, cognitive neuroscience, and artificial intelligence (AI). It does not solve the hard problem of consciousness (Chalmers, 1995), but it advances the discourse by integrating the irreducible nature of qualia into a rigorous, physicalist framework. While primarily theoretical, these insights also open avenues for future AI and artificial consciousness (AC) research, suggesting that recognizing and harnessing irreducible topological features may be an important unlock in moving beyond incremental, scale-based improvements and toward artificial general intelligence (AGI) and AC.


Qualia and the Formal Structure of Meaning

Arsiwalla, Xerxes D.

arXiv.org Artificial Intelligence

This work explores the hypothesis that subjectively attributed meaning constitutes the phenomenal content of conscious experience. That is, phenomenal content is semantic. This form of subjective meaning manifests as an intrinsic and non-representational character of qualia. Empirically, subjective meaning is ubiquitous in conscious experiences. We point to phenomenological studies that lend evidence to support this. Furthermore, this notion of meaning closely relates to what Frege refers to as "sense", in metaphysics and philosophy of language. It also aligns with Peirce's "interpretant", in semiotics. We discuss how Frege's sense can also be extended to the raw feels of consciousness. Sense and reference both play a role in phenomenal experience. Moreover, within the context of the mind-matter relation, we provide a formalization of subjective meaning associated to one's mental representations. Identifying the precise maps between the physical and mental domains, we argue that syntactic and semantic structures transcend language, and are realized within each of these domains. Formally, meaning is a relational attribute, realized via a map that interprets syntactic structures of a formal system within an appropriate semantic space. The image of this map within the mental domain is what is relevant for experience, and thus comprises the phenomenal content of qualia. We conclude with possible implications this may have for experience-based theories of consciousness.


The purpose of qualia: What if human thinking is not (only) information processing?

Korth, Martin

arXiv.org Artificial Intelligence

Despite recent breakthroughs in the field of artificial intelligence (AI) - or more specifically, machine learning (ML) algorithms for object recognition and natural language processing - it seems to be the majority view that current AI approaches are still no real match for natural intelligence (NI). More importantly, philosophers have collected a long catalogue of features which imply that NI works differently from current AI not only in a gradual sense, but in a more substantial way: NI is closely related to consciousness, intentionality and experiential features like qualia (the subjective contents of mental states), and it allows for understanding (e.g., gaining insight into causal relationships instead of 'blindly' relying on correlations), as well as aesthetic and ethical judgement beyond what we can put into (explicit or data-induced implicit) rules to program machines with. Additionally, psychologists find NI to range from unconscious psychological processes to focused information processing, and from embodied and implicit cognition to 'true' agency and creativity. NI thus seems to transcend any neurobiological functionalism by operating on 'bits of meaning' instead of information in the sense of data, quite unlike both the 'good old-fashioned' symbolic AI of the past and the current wave of deep-neural-network-based, 'sub-symbolic' AI, which share the idea of thinking as (only) information processing. In the following I propose an alternative view of NI as information processing plus 'bundle pushing', discuss an example which illustrates how bundle pushing can cut information processing short, and suggest first ideas for scientific experiments in neurobiology and information theory as further investigations.


Could AI Ever Pass the Van Gogh Test?

#artificialintelligence

That is, the Van Gogh Test for sheer creativity. This past Thursday night, Discovery Institute's tech summit COSM 2022 presented a live, in-person interview with Federico Faggin, the Italian physicist and computer engineer who co-won the prestigious Kyoto Prize in 1997 for helping develop the Intel 4004 chip. Faggin was interviewed by technology reporter Maria Teresa Cometto, who asked him to regale the audience with tales about helping to design early microchips. Eventually Faggin recounted a time when he was "studying neuroscience and biology, trying to understand how the brain works," and came upon a startling realization: And at one point I asked myself, "But wait a second, I mean these books, all this talk about electrical signals, biochemical signals, but when I taste some chocolate, I mean I have a taste. A computer, does it taste this? Does it have a sensation or a feeling for the signals that he has in his memory or in his CPU? So where are sensations and feelings coming from?" … And so I discovered what was later called the hard problem of consciousness.


The "hard problems" of Machine Consciousness

#artificialintelligence

Let's discuss the various difficulties that need to be overcome in order to create artificial intelligence that can achieve some level of consciousness, let alone a human-like level. Some of the issues include the need to create an AI that can understand and react to the complexities of experiences which stem from the real world, as well as the need to create AI that can understand and replicate the logical workings of the human mind. Sentience is the ability to feel, perceive, or experience subjectively. Machines can currently only experience the world objectively; they cannot feel or perceive subjectively. To create a conscious machine, we need to understand sentience and how to create it.


On the independence between phenomenal consciousness and computational intelligence

Merchán, Eduardo C. Garrido, Lumbreras, Sara

arXiv.org Artificial Intelligence

Consciousness and intelligence are properties commonly understood as interdependent by folk psychology and society in general. The term artificial intelligence, and the kinds of problems it has managed to solve in recent years, have been used as an argument that machines experience some sort of consciousness. Following Russell's analogy, if a machine is able to do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are given to entities that can solve the kinds of problems that a neurotypical person can, does a machine potentially have more rights than a person with a disability? For example, autism spectrum disorder can make a person unable to solve the kinds of problems that a machine solves. We believe that the obvious answer is no, as problem solving does not imply consciousness. Consequently, we argue in this paper that phenomenal consciousness and, at least, computational intelligence are independent, and that machines do not possess phenomenal consciousness, although they can potentially develop a higher computational intelligence than human beings. To do so, we formulate an objective measure of computational intelligence and study how it presents in human beings, animals and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed in humans, animals and machines. As phenomenal consciousness and computational intelligence are independent, this fact has critical implications for society, which we also analyze in this work.