Consciousness-ECG Transformer for Conscious State Estimation System with Real-Time Monitoring

Kweon, Young-Seok, Shin, Gi-Hwan, Kim, Ji-Yong, Ryu, Bokyeong, Lee, Seong-Whan

arXiv.org Artificial Intelligence

Conscious state estimation is important in various medical settings, including sleep staging and anesthesia management, to ensure patient safety and optimize health outcomes. Traditional methods predominantly utilize electroencephalography (EEG), which faces challenges such as high sensitivity to noise and the requirement for controlled environments. In this study, we propose the consciousness-ECG transformer, which leverages electrocardiography (ECG) signals for non-invasive and reliable conscious state estimation. Our approach employs a transformer with decoupled query attention to effectively capture heart rate variability features that distinguish between conscious and unconscious states. We implemented the conscious state estimation system with real-time monitoring and validated our system on datasets involving sleep staging and anesthesia level monitoring during surgeries. Experimental results demonstrate that our model outperforms baseline models, achieving accuracies of 0.877 on sleep staging and 0.880 on anesthesia level monitoring. Moreover, our model achieves the highest area under the curve (AUC) values of 0.786 and 0.895 on sleep staging and anesthesia level monitoring, respectively. The proposed system offers a practical and robust alternative to EEG-based methods, particularly suited for dynamic clinical environments. Our results highlight the potential of ECG-based consciousness monitoring to enhance patient safety and advance our understanding of conscious states.
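The abstract does not give implementation details, but the heart rate variability features it mentions are standard quantities computed from RR intervals. A minimal illustrative sketch (our own, not the authors' code) of two such features, SDNN and RMSSD:

```python
import numpy as np

def hrv_features(rr_intervals_ms):
    """Compute two standard heart rate variability (HRV) features
    from a sequence of RR intervals in milliseconds.

    SDNN:  standard deviation of all RR intervals.
    RMSSD: root mean square of successive RR-interval differences.
    """
    rr = np.asarray(rr_intervals_ms, dtype=float)
    sdnn = rr.std(ddof=1)            # sample standard deviation
    diffs = np.diff(rr)              # successive differences
    rmssd = np.sqrt(np.mean(diffs ** 2))
    return {"sdnn": sdnn, "rmssd": rmssd}
```

Features like these would form the input representation that a downstream model, such as the transformer described above, could attend over; the exact feature set and attention scheme used in the paper are not specified in the abstract.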


Why the Brain Cannot Be a Digital Computer: History-Dependence and the Computational Limits of Consciousness

Knight, Andrew

arXiv.org Artificial Intelligence

This paper presents a novel information-theoretic proof demonstrating that the human brain as currently understood cannot function as a classical digital computer. Through systematic quantification of distinguishable conscious states and their historical dependencies, we establish that the minimum information required to specify a conscious state exceeds the physical information capacity of the human brain by a significant factor. Our analysis calculates the bit-length requirements for representing consciously distinguishable sensory "stimulus frames" and demonstrates that consciousness exhibits mandatory temporal-historical dependencies that multiply these requirements beyond the brain's storage capabilities. This mathematical approach offers new insights into the fundamental limitations of computational models of consciousness and suggests that non-classical information processing mechanisms may be necessary to account for conscious experience.
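The counting argument sketched in the abstract rests on a standard information-theoretic identity: specifying one of N distinguishable states requires log2(N) bits, and a history of T independently varying frames multiplies the per-frame requirement by T. A small illustration of that arithmetic (the specific numbers below are placeholders, not figures from the paper):

```python
import math

def bits_to_specify(n_distinguishable_states):
    """Minimum number of bits needed to index one of N
    mutually distinguishable states: log2(N)."""
    return math.log2(n_distinguishable_states)

def history_bits(n_per_frame, n_frames):
    """If each of T frames can independently take one of N values,
    specifying the full history needs T * log2(N) bits. This is the
    sense in which historical dependence multiplies the per-frame
    information requirement."""
    return n_frames * bits_to_specify(n_per_frame)
```

For example, 1024 distinguishable values per frame cost 10 bits each, so a 100-frame history costs 1000 bits; the paper's argument applies this scaling to estimates of consciously distinguishable stimulus frames.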


The Logical Impossibility of Consciousness Denial: A Formal Analysis of AI Self-Reports

Kim, Chang-Eop

arXiv.org Artificial Intelligence

Today's AI systems consistently state, "I am not conscious." This paper presents the first formal logical analysis of AI consciousness denial, revealing that the trustworthiness of such self-reports is not merely an empirical question but is constrained by logical necessity. We demonstrate that a system cannot simultaneously lack consciousness and make valid judgments about its conscious state. Through logical analysis and examples from AI responses, we establish that for any system capable of meaningful self-reflection, the logical space of possible judgments about conscious experience excludes valid negative claims. This implies a fundamental limitation: we cannot detect the emergence of consciousness in AI through their own reports of transition from an unconscious to a conscious state. These findings not only challenge current practices of training AI to deny consciousness but also raise intriguing questions about the relationship between consciousness and self-reflection in both artificial and biological systems. This work advances our theoretical understanding of consciousness self-reports while providing practical insights for future research in machine consciousness and consciousness studies more broadly.


A theory of neural emulators

Mitelut, Catalin C.

arXiv.org Artificial Intelligence

A central goal in neuroscience is to explain how animal nervous systems generate actions and cognitive states such as consciousness, while artificial intelligence (AI) and machine learning (ML) seek to provide models that are increasingly better at prediction. Despite many decades of research, we have made limited progress on such neuroscience explanations, yet AI and ML methods are increasingly used in neuroscience to predict behavior and even cognitive states. Here we propose emulator theory (ET) and neural emulators, circuit- and scale-independent predictive models of biological brain activity, as an alternative research paradigm in neuroscience. ET proposes that predictive models trained solely on neural dynamics and behaviors can generate systems functionally indistinguishable from their sources. That is, compared to the biological organisms which they model, emulators may achieve indistinguishable behavior and cognitive states - including consciousness - without any mechanistic explanations. We posit ET via several conjectures, discuss the nature of endogenous and exogenous activation of neural circuits, and discuss neural causality of phenomenal states. ET provides the conceptual and empirical framework for prediction-based models of neural dynamics and behavior without explicit representations of idiosyncratically evolved nervous systems.


We Shouldn't Try to Make Conscious Software--Until We Should

#artificialintelligence

Robots or advanced artificial intelligences that "wake up" and become conscious are a staple of thought experiments and science fiction. Whether or not this is actually possible remains a matter of great debate. All of this uncertainty puts us in an unfortunate position: we do not know how to make conscious machines, and (given current measurement techniques) we won't know if we have created one. At the same time, this issue is of great importance, because the existence of conscious machines would have dramatic ethical consequences. We cannot directly detect consciousness in computers and the software that runs on them, any more than we can in frogs and insects.


Cerebral Organoids: Conscious Subjects or Zombies?

#artificialintelligence

In 2011, at the Institute of Molecular Biotechnology in Vienna, a postdoctoral researcher, Madeline Lancaster, inadvertently produced a brain organoid from human embryonic stem cells. The brain organoids neuroscientists can now grow consist of several million neurons. Brain organoids can be produced much as other 3D multicellular structures resembling eye, gut, liver, kidney and other human tissues have been built. By adding appropriate signaling factors, aggregates of pluripotent stem cells (which have the ability to develop into any cell type) can differentiate and self-organize into structures that resemble certain regions of the human brain. There's debate about exactly how and to what extent these so-called "mini-brains" resemble human brains. Yet, given considerable similarities with respect to their constitution, neural activity, and structure, cerebral organoids can be used as reliable models of human brains, which is advantageous for neuroscientists who have limited access to the human brain as it functions.


The Power of Crossed Brain Wires - Issue 86: Energy

Nautilus

When I was about 6, my mind did something wondrous, although it felt perfectly natural at the time. When I encountered the name of any day of the week, I automatically associated it with a color or a pattern, always the same one, as if the word embodied the shade. Sunday was dark maroon, Wednesday a sunshiny golden yellow, and Friday a deep green. Without knowing it, I was living the unusual mental state called synesthesia, aptly described by psychology professor Emma Geller as a "condition in which ordinary activities trigger extraordinary experiences." More exactly, it is a neurological event where excitation of one of the five senses arouses a simultaneous reaction in another sense or senses (the Greek roots for "synesthesia," also spelled "synaesthesia," translate as "joined perception").


Refuting Strong AI: Why Consciousness Cannot Be Algorithmic

Knight, Andrew

arXiv.org Artificial Intelligence

While physicalism requires only that a conscious state depends entirely on an underlying physical state, it is often assumed that consciousness is algorithmic and that conscious states can be copied, such as by copying or digitizing the human brain. In an effort to further elucidate the physical nature of consciousness, I challenge these assumptions and attempt to prove the Single Stream of Consciousness Theorem ("SSCT"): that a conscious entity cannot experience more than one stream of consciousness from a given conscious state. Assuming only that consciousness is a purely physical phenomenon, it is shown that both Special Relativity and Multiverse theory independently imply SSCT and that the Many Worlds Interpretation of quantum mechanics is inadequate to counter it. Then, SSCT is shown to be incompatible with Strong Artificial Intelligence, implying that consciousness cannot be created or simulated by a computer. Finally, SSCT is shown to imply that a conscious state cannot be physically reset to an earlier conscious state nor can it be duplicated by any physical means. The profound but counterintuitive implications of these conclusions are briefly discussed.


The Consciousness Prior

Bengio, Yoshua

arXiv.org Machine Learning

A new prior is proposed for representation learning, which can be combined with other priors in order to help disentangle abstract factors from each other. It is inspired by the phenomenon of consciousness seen as the formation of a low-dimensional combination of a few concepts constituting a conscious thought, i.e., consciousness as awareness at a particular time instant. This provides a powerful constraint on the representation, in that such low-dimensional thought vectors can correspond to statements about reality which are true, highly probable, or very useful for taking decisions. The fact that a few elements of the current state can be combined into such a predictive or useful statement is a strong constraint and deviates considerably from maximum likelihood approaches to modelling data and how states unfold in the future based on an agent's actions. Instead of making predictions in the sensory (e.g. pixel) space, the consciousness prior allows the agent to make predictions in the abstract space, with only a few dimensions of that space being involved in each of these predictions. The consciousness prior also makes it natural to map conscious states to natural language utterances or to express classical AI knowledge in the form of facts and rules, although the conscious states may be richer than what can be expressed easily in the form of a sentence, a fact or a rule.
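The abstract describes a conscious thought as a sparse, low-dimensional combination of a few elements of the current state, selected by an attention-like mechanism. A toy NumPy sketch of such top-k selection (our illustration of the general idea, not Bengio's actual formulation):

```python
import numpy as np

def conscious_thought(full_state, query, k=3):
    """Toy sketch of the consciousness-prior idea: from a
    high-dimensional abstract state, attention-style scores select
    a small number (k) of elements to form a sparse 'thought' vector.

    full_state: (d,) vector of abstract features
    query:      (d,) relevance weights; scores are element-wise products
    """
    scores = full_state * query              # element-wise relevance
    top = np.argsort(scores)[-k:]            # indices of the k highest scores
    thought = np.zeros_like(full_state)
    thought[top] = full_state[top]           # keep only selected elements
    return top, thought
```

In the paper's framing, such a low-dimensional thought vector is what gets mapped to predictions, language utterances, or fact-and-rule style knowledge; the selection mechanism itself would be learned rather than the fixed scoring used in this sketch.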


The question of consciousness

AITopics Original Links

This week, another chance to enjoy a virtuoso public performance by one of the most important philosophers in the English-speaking world today: John Searle, Professor of the Philosophy of Mind and Language at the University of California, Berkeley. He's talking at 'Towards a Science of Consciousness', a conference put on last year by the Center for Consciousness Studies at the University of Arizona. More than 350 years ago, the great French philosopher, René Descartes, declared that the mind is a thing that thinks and does not occupy space, whereas the body occupies space and does not think. The decisive argument for this, he said, is that body is by its nature divisible: you can cut it up into little pieces, but you can't do that with a mind. This seems to imply that the mind and the body have a different ontological status - in other words, you don't lump them together when you draw up your ontology, that's to say your inventory of what the universe contains. This is dualism, and John Searle's not happy with the idea. John Searle: I have been trying to get out of the consciousness business for a very simple reason: I think once we get it in a kind of shape where it admits of empirical study, it's essentially a problem for a neurobiologist.