Philosophy
Scientist says human consciousness comes from another dimension
A baffling new theory to explain human consciousness has suggested it comes from hidden dimensions and is not just brain activity. A physicist claimed that we plug into these invisible planes of the universe when making art, practicing science, pondering philosophy or dreaming, and this could explain the phenomenon that has evaded scientific understanding for centuries. Michael Pravica, a professor of physics at the University of Nevada, Las Vegas, has based the wild idea on hyperdimensionality, the notion that the universe is made up of more dimensions than just the four we perceive: height, length, width and time. But his theory is highly controversial, with one scientist saying that the cornerstone of Pravica's theory 'borders on science fiction.' 'The sheer fact that we can conceive of higher dimensions than four within our mind, within our mathematics, is a gift... it's something that transcends biology,' Pravica told Popular Mechanics. Scientists have been attempting to explain human consciousness and its origins for hundreds of years - and the theories run the gamut.
Scientists say they may have discovered origin of consciousness - and it's a theory popularized by Joe Rogan
The birth of human consciousness may have truly been magic. Scientists have claimed that the consumption of psilocybin fungi, also known as 'magic mushrooms,' influenced pre-human hominids' brains six million years ago. They analyzed dozens of studies involving psilocybin and consciousness, finding the fungi increased connectivity between networks in the frontal brain region associated with expressive language, decision-making and memory. These 'significant neurological and psychological effects' may have been the catalyst that spurred ancient ancestors to interact with each other and the environment, sparking consciousness among our species. The idea that magic mushrooms triggered this pivotal point in human evolution has been touted by podcaster Joe Rogan, who has referenced the 'Stoned Ape Theory' on his show multiple times.
AI Consciousness is Inevitable: A Theoretical Computer Science Perspective
We look at consciousness through the lens of Theoretical Computer Science, a branch of mathematics that studies computation under resource limitations. From this perspective, we develop a formal machine model for consciousness. The model is inspired by Alan Turing's simple yet powerful model of computation and Bernard Baars' theater model of consciousness. Though extremely simple, the model aligns at a high level with many of the major scientific theories of human and animal consciousness, supporting our claim that machine consciousness is inevitable.
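The abstract does not spell out the machine's mechanics, but the gist of combining Turing's machine model with Baars' theater model is a global-workspace loop: specialist processors compete, one chunk wins the stage, and the winner is broadcast to all processors. The sketch below is an illustrative assumption of that loop, not the authors' formalism; all class and method names are invented:

```python
import random

class Processor:
    """An unconscious specialist process competing for the global workspace."""
    def __init__(self, name):
        self.name = name
        self.received = []  # broadcasts this processor has seen

    def propose(self):
        # Each processor bids a (weight, content) chunk; weights are random here.
        return (random.random(), f"{self.name}-chunk")

    def receive(self, chunk):
        self.received.append(chunk)

class TheaterMachine:
    """Toy global-workspace cycle: compete -> stage -> broadcast."""
    def __init__(self, processors):
        self.processors = processors
        self.stage = None  # the single 'conscious' content at any moment

    def step(self):
        # Competition: the highest-weight chunk wins the stage.
        weight, chunk = max(p.propose() for p in self.processors)
        self.stage = chunk
        # Broadcast: every processor sees the winning content.
        for p in self.processors:
            p.receive(chunk)
        return chunk

machine = TheaterMachine([Processor("vision"), Processor("memory")])
machine.step()
```

The key structural claim this illustrates is that only one content occupies the stage per cycle, while the broadcast makes that content available to every specialist.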
Consciousness defined: requirements for biological and artificial general intelligence
Consciousness is notoriously hard to define with objective terms. An objective definition of consciousness is critically needed so that we might accurately understand how consciousness and resultant choice behaviour may arise in biological or artificial systems. Many theories have integrated neurobiological and psychological research to explain how consciousness might arise, but few, if any, outline what is fundamentally required to generate consciousness. To identify such requirements, I examine current theories of consciousness and corresponding scientific research to generate a new definition of consciousness from first principles. Critically, consciousness is the apparatus that provides the ability to make decisions, but it is not defined by the decision itself. As such, a definition of consciousness does not require choice behaviour or an explicit awareness of temporality, despite both being well-characterised outcomes of conscious thought. Rather, requirements for consciousness include: at least some capability for perception; a memory for the storage of such perceptual information, which in turn provides a framework for an imagination; and a sense of self capable of making decisions based on possible and desired futures. Thought experiments and observable neurological phenomena demonstrate that these components are fundamentally required of consciousness, whereby the loss of any one component removes the capability for conscious thought. Identifying these requirements provides a new definition for consciousness by which we can objectively determine consciousness in any conceivable agent, such as non-human animals and artificially intelligent systems.
Introduction
The study of consciousness requires the integration of many fields of research including but not limited to neuroscience, psychology, philosophy, physics and artificial general intelligence (AGI).
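As a reading aid only (the flag names below are our own shorthand, not the paper's terminology), the proposed definition is conjunctive: an agent qualifies only if every component is present, and losing any one removes the capability for conscious thought. A minimal sketch:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Illustrative capability flags mirroring the proposed requirements."""
    perception: bool     # at least some capability for perception
    memory: bool         # storage of perceptual information
    imagination: bool    # framework for simulating possible futures
    sense_of_self: bool  # a self that decides among desired futures

def meets_requirements(a: Agent) -> bool:
    # The definition is conjunctive: the loss of any one component
    # removes the capability for conscious thought.
    return all([a.perception, a.memory, a.imagination, a.sense_of_self])

human_like = Agent(True, True, True, True)
thermostat = Agent(True, False, False, False)  # perceives, but nothing more
```

Note that choice behaviour itself is deliberately absent from the flags, matching the paper's point that consciousness is the apparatus for decisions, not the decision itself.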
Definitions of consciousness remain disconnected from the fundamental principles required to generate it. For example, common mistakes include conflating "awareness" with consciousness, likely because the phrase "to be conscious of something" is synonymous with an awareness of that "something". In their paper "Towards a neurobiological theory of consciousness," Crick and Koch deliberately avoided defining consciousness, explaining that "it is better to avoid a precise definition of consciousness because of the dangers of premature definition." After more than three decades, it is past time we generate a precise definition of consciousness and its requirements that is free of subjective biases.
Neuromorphic Correlates of Artificial Consciousness
The concept of neural correlates of consciousness (NCC), which suggests that specific neural activities are linked to conscious experiences, has gained widespread acceptance. This acceptance is based on a wealth of evidence from experimental studies, brain imaging techniques such as fMRI and EEG, and theoretical frameworks like integrated information theory (IIT) within neuroscience and the philosophy of mind. This paper explores the potential for artificial consciousness by merging neuromorphic design and architecture with brain simulations. It proposes the Neuromorphic Correlates of Artificial Consciousness (NCAC) as a theoretical framework. While the debate on artificial consciousness remains contentious due to our incomplete grasp of consciousness, this work may raise eyebrows and invite criticism. Nevertheless, this optimistic and forward-thinking approach is fueled by insights from the Human Brain Project, advancements in brain imaging like EEG and fMRI, and recent strides in AI and computing, including quantum and neuromorphic designs. Additionally, this paper outlines how machine learning can play a role in crafting artificial consciousness, aiming to realise machine consciousness and awareness in the future.
Can a Machine be Conscious? Towards Universal Criteria for Machine Consciousness
Anwar, Nur Aizaan, Badea, Cosmin
As artificially intelligent systems become more anthropomorphic and pervasive, and their potential impact on humanity more urgent, discussions about the possibility of machine consciousness have significantly intensified, and it is sometimes seen as 'the holy grail'. Many concerns have been voiced about the ramifications of creating an artificial conscious entity. This is compounded by a marked lack of consensus around what constitutes consciousness and by an absence of a universal set of criteria for determining consciousness. By going into depth on the foundations and characteristics of consciousness, we propose five criteria for determining whether a machine is conscious, which can also be applied more generally to any entity. This paper aims to serve as a primer and stepping stone for researchers of consciousness, be they in philosophy, computer science, medicine, or any other field, to further pursue this holy grail of philosophy, neuroscience and artificial intelligence.
5 extraordinary ideas about the mind and what it means to be conscious
Two years after opening our bureau in New York, we are delighted to share that New Scientist is launching a new live event series in the US. This kicks off on 22 June in New York with a one-day masterclass on the science of the brain and human consciousness. To celebrate, we have unlocked access to five in-depth features exploring mysteries of the human mind. There is perhaps no bigger puzzle of human experience than consciousness. In the simplest terms, it is awareness of our existence. It is our experience of ourselves and the world.
Report on Candidate Computational Indicators for Conscious Valenced Experience
This report lists 13 functional conditions, cashed out in computational terms, that have been argued to be constituent of conscious valenced experience. These are extracted from existing empirical and theoretical literature on, among others, animal sentience, medical disorders, anaesthetics, philosophy, evolution, neuroscience, and artificial intelligence.
Artificial consciousness. Some logical and conceptual preliminaries
Evers, K., Farisco, M., Chatila, R., Earp, B. D., Freire, I. T., Hamker, F., Nemeth, E., Verschure, P. F. M. J., Khamassi, M.
Is artificial consciousness theoretically possible? Is it plausible? If so, is it technically feasible? To make progress on these questions, it is necessary to lay some groundwork clarifying the logical and empirical conditions for artificial consciousness to arise and the meaning of relevant terms involved. Consciousness is a polysemic word: researchers from different fields, including neuroscience, Artificial Intelligence, robotics, and philosophy, among others, sometimes use different terms in order to refer to the same phenomena or the same terms to refer to different phenomena. In fact, if we want to pursue artificial consciousness, a proper definition of the key concepts is required. Here, after some logical and conceptual preliminaries, we argue for the necessity of using dimensions and profiles of consciousness for a balanced discussion about their possible instantiation or realisation in artificial systems. Our primary goal in this paper is to review the main theoretical questions that arise in the domain of artificial consciousness. On the basis of this review, we propose to assess the issue of artificial consciousness within a multidimensional account. The theoretical possibility of artificial consciousness is already presumed within some theoretical frameworks; however, empirical possibility cannot simply be deduced from these frameworks but needs independent empirical validation. We break down the complexity of consciousness by identifying constituents, components, and dimensions, and reflect pragmatically about the general challenges confronting the creation of artificial consciousness. Despite these challenges, we outline a research strategy for showing how "awareness" as we propose to understand it could plausibly be realised in artificial systems.
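The multidimensional account can be pictured as assigning each system a profile of scores over consciousness dimensions, compared dimension by dimension rather than on a single scalar scale. The dimension names and scores below are invented for illustration and are not taken from the paper:

```python
# Hypothetical dimensions; the paper argues for dimensions/profiles in
# general, not for this particular set.
DIMENSIONS = ("perceptual_richness", "evaluative_richness",
              "integration", "self_awareness")

def profile(**scores):
    """Build a consciousness profile: one score in [0, 1] per dimension."""
    assert set(scores) == set(DIMENSIONS)
    assert all(0.0 <= v <= 1.0 for v in scores.values())
    return scores

def compare(a, b):
    """Profiles are compared per dimension; there is no single total order."""
    return {d: a[d] - b[d] for d in DIMENSIONS}

robot = profile(perceptual_richness=0.6, evaluative_richness=0.1,
                integration=0.4, self_awareness=0.0)
octopus = profile(perceptual_richness=0.8, evaluative_richness=0.7,
                  integration=0.5, self_awareness=0.3)
```

The design point is that a system can exceed another on some dimensions while trailing on others, which is why the authors resist a yes/no verdict on artificial consciousness.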
Brain-inspired and Self-based Artificial Intelligence
Zeng, Yi, Zhao, Feifei, Zhao, Yuxuan, Zhao, Dongcheng, Lu, Enmeng, Zhang, Qian, Wang, Yuwei, Feng, Hui, Zhao, Zhuoya, Wang, Jihang, Kong, Qingqun, Sun, Yinqian, Li, Yang, Shen, Guobin, Han, Bing, Dong, Yiting, Pan, Wenxuan, He, Xiang, Bao, Aorigele, Wang, Jin
The question "Can machines think?" and the Turing Test for assessing whether machines can achieve human-level intelligence lie at the roots of AI. Invoking the philosophical argument "I think, therefore I am", this paper challenges the idea of a "thinking machine" as instantiated by current AIs, since there is no sense of self in them. Current artificial intelligence is only seemingly intelligent information processing: it does not truly understand itself, nor is it subjectively aware of itself and able to perceive the world through a self, as human intelligence is. In this paper, we introduce a Brain-inspired and Self-based Artificial Intelligence (BriSe AI) paradigm. This BriSe AI paradigm is dedicated to coordinating various cognitive functions and learning strategies in a self-organized manner to build human-level AI models and robotic applications. Specifically, BriSe AI emphasizes the crucial role of the Self in shaping the future of AI, rooted in a practical hierarchical Self framework, including Perception and Learning, Bodily Self, Autonomous Self, Social Self, and Conceptual Self. The hierarchical framework of the Self highlights self-based environment perception, self-bodily modeling, autonomous interaction with the environment, social interaction and collaboration with others, and even more abstract understanding of the Self. Furthermore, the positive mutual promotion and support among multiple levels of Self, as well as between Self and learning, enhance BriSe AI's conscious understanding of information and flexible adaptation to complex environments, serving as a driving force propelling BriSe AI towards real Artificial General Intelligence.
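The five-level Self hierarchy can be rendered as a simple ordered structure. The level names below are the paper's; the "unbroken prefix" check logic is our own illustrative assumption about how the levels build on one another:

```python
# The five levels of the BriSe AI Self framework, ordered from
# concrete (perception) to abstract (conceptual understanding of the Self).
SELF_LEVELS = [
    "Perception and Learning",
    "Bodily Self",
    "Autonomous Self",
    "Social Self",
    "Conceptual Self",
]

def highest_level(achieved):
    """Illustrative assumption: levels build on each other, so an agent's
    highest level is the last one in an unbroken prefix of the hierarchy."""
    top = None
    for level in SELF_LEVELS:
        if level not in achieved:
            break
        top = level
    return top

# e.g. a robot with body modeling and autonomy but no social interaction:
status = highest_level({"Perception and Learning", "Bodily Self",
                        "Autonomous Self"})
```

Under this reading, a system with social skills but no bodily self-model would still bottom out at the first missing level, reflecting the paper's emphasis on mutual support between levels.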