Neuroscience Weighs in on Physics' Biggest Questions - Issue 107: The Edge

Nautilus

For an empirical science, physics can be remarkably dismissive of some of our most basic observations. We see objects existing in definite locations, but the wave nature of matter washes that away. We perceive time to flow, but how could it, really? We feel ourselves to be free agents, and that's just quaint. Physicists like nothing better than to expose our view of the universe as parochial. But when asked why our impressions are so off, they mumble some excuse and slip out the side door of the party. Physicists, in other words, face the same hard problem of consciousness as neuroscientists do: the problem of bridging objective description and subjective experience. To relate fundamental theory to what we actually observe in the world, they must explain what it means "to observe"--to become conscious of. And they tend to be slapdash about it. They divide the world into "system" and "observer," study the former intensely, and take the latter for granted--or, worse, for a fool.


A Theory of Consciousness from a Theoretical Computer Science Perspective: Insights from the Conscious Turing Machine

arXiv.org Artificial Intelligence

The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. In the spirit of Alan Turing's simple yet powerful definition of a computer, the Turing Machine (TM), and the perspective of computational complexity theory, we formalize a modified version of the Global Workspace Theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, and others. We are not looking for a complex model of the brain nor of cognition, but for a simple computational model of (the admittedly complex concept of) consciousness. We do this by defining the Conscious Turing Machine (CTM), also called a conscious AI, and then defining consciousness and related notions in the CTM. While these are only mathematical (TCS) definitions, we suggest why the CTM has the feeling of consciousness. The TCS perspective provides a simple formal framework for employing tools from computational complexity theory and machine learning to help us understand consciousness and related concepts. Previously we explored high-level explanations for the feelings of pain and pleasure in the CTM. Here we consider three examples related to vision (blindsight, inattentional blindness, and change blindness), followed by discussions of dreams, free will, and altered states of consciousness.
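The CTM builds on the Global Workspace picture: many special-purpose processors compete to place a small "chunk" of information on a limited-capacity stage, and the winning chunk is then broadcast back to every processor. The sketch below is only an informal illustration of that compete-then-broadcast cycle, not the paper's formal definitions; the names Chunk, Processor, and step, and the toy weighting scheme, are invented for this example.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Chunk:
        source: str    # which processor produced this chunk
        gist: str      # compressed content the processor wants to publicize
        weight: float  # how strongly the processor bids for the workspace

    class Processor:
        def __init__(self, name: str):
            self.name = name
            self.inbox: List[Chunk] = []

        def propose(self) -> Chunk:
            # A real processor would compute its gist and weight from its inputs;
            # here we just emit a placeholder bid.
            return Chunk(self.name, f"gist from {self.name}", float(len(self.name)))

        def receive(self, broadcast: Chunk) -> None:
            # Every processor sees the winning chunk (the "conscious" content).
            self.inbox.append(broadcast)

    def step(processors: List[Processor]) -> Chunk:
        bids = [p.propose() for p in processors]
        winner = max(bids, key=lambda c: c.weight)  # competition for the workspace
        for p in processors:
            p.receive(winner)                       # global broadcast
        return winner

    if __name__ == "__main__":
        procs = [Processor(n) for n in ("vision", "audition", "memory")]
        print(step(procs).gist)

In the paper the competition is organized as a structured up-tree rather than a single max over bids; the point of the sketch is only the cycle of local bids, one winner, and global broadcast.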


Why a 'genius' scientist thinks our consciousness originates at the quantum level

#artificialintelligence

Human consciousness is one of the grand mysteries of our time on earth. How do you know that you are "you"? Does your sense of being aware of yourself come from your mind or is it your body that is creating it? What really happens when you enter an "altered" state of consciousness with the help of some chemical or plant? While you would think this basic enigma of our self-awareness would be at the forefront of scientific inquiry, science does not yet have strong answers to these questions.


Importance measures derived from random forests: characterisation and extension

arXiv.org Machine Learning

Nowadays, new technologies, and especially artificial intelligence, are increasingly established in our society. Big data analysis and machine learning, two sub-fields of artificial intelligence, are at the core of many recent breakthroughs in many application fields (e.g., medicine, communication, finance, ...), including some that are strongly related to our day-to-day life (e.g., social networks, computers, smartphones, ...). In machine learning, significant improvements are usually achieved at the price of increasing computational complexity and thanks to bigger datasets. Currently, the cutting-edge models built by the most advanced machine learning algorithms are typically both very efficient and profitable, but also extremely complex. Their complexity is such that these models are commonly seen as black boxes providing a prediction or a decision that cannot be interpreted or justified. Nevertheless, whether these models are used autonomously or as simple decision-support tools, they are already being deployed in applications where health and human life are at stake. Therefore, it appears necessary not to blindly believe everything coming out of those models without a detailed understanding of their predictions or decisions. Accordingly, this thesis aims at improving the interpretability of models built by a specific family of machine learning algorithms, the so-called tree-based methods. Several mechanisms have been proposed to interpret these models, and throughout this thesis we aim to improve their understanding, study their properties, and define their limitations.
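The thesis's subject, importance measures derived from random forests, can be made concrete with the two measures most commonly reported for tree ensembles: Mean Decrease of Impurity (MDI) and permutation importance. The snippet below is only an illustrative sketch using scikit-learn on synthetic data, not the thesis's own estimators or experimental setup.

    # Illustrative only: MDI and permutation importance for a random forest.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    forest.fit(X_train, y_train)

    # MDI: impurity reduction accumulated over all splits on each feature.
    print("MDI importances:", forest.feature_importances_.round(3))

    # Permutation importance: drop in test score when a feature is shuffled.
    perm = permutation_importance(forest, X_test, y_test, n_repeats=20,
                                  random_state=0)
    print("Permutation importances:", perm.importances_mean.round(3))

MDI is computed on the training data and can favour high-cardinality features, whereas permutation importance measures the loss in held-out performance when a feature's values are shuffled; contrasting the two is a standard starting point for the kind of characterisation the thesis pursues.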


Conscious AI

arXiv.org Artificial Intelligence

Recent advances in artificial intelligence (AI) have achieved human-scale speed and accuracy for classification tasks. In turn, these capabilities have made AI a viable replacement for many human activities that at their core involve classification, such as basic mechanical and analytical tasks in low-level service jobs. Current systems do not need to be conscious to recognize patterns and classify them. However, for AI to progress to more complicated tasks requiring intuition and empathy, it must develop capabilities such as metathinking, creativity, and empathy akin to human self-awareness or consciousness. We contend that such a paradigm shift is possible only through a fundamental shift in the state of artificial intelligence toward consciousness, a shift similar to what took place for humans through the process of natural selection and evolution. As such, this paper aims to theoretically explore the requirements for the emergence of consciousness in AI. It also provides a principled understanding of how conscious AI can be detected and how it might be manifested in contrast to the dominant paradigm that seeks to ultimately create machines that are linguistically indistinguishable from humans.


A Theoretical Computer Science Perspective on Consciousness

arXiv.org Artificial Intelligence

The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. This paper studies consciousness from the perspective of theoretical computer science. It formalizes the Global Workspace Theory (GWT) originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, and others. Our major contribution lies in the precise formal definition of a Conscious Turing Machine (CTM), also called a Conscious AI. We define the CTM in the spirit of Alan Turing's simple yet powerful definition of a computer, the Turing Machine (TM). We are not looking for a complex model of the brain nor of cognition but for a simple model of (the admittedly complex concept of) consciousness. After formally defining CTM, we give a formal definition of consciousness in CTM. We then suggest why the CTM has the feeling of consciousness. The reasonableness of the definitions and explanations can be judged by how well they agree with commonly accepted intuitive concepts of human consciousness, the breadth of related concepts that the model explains easily and naturally, and the extent of its agreement with scientific evidence.
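As a reminder of the kind of minimal formal object the CTM is defined "in the spirit of", here is a toy Turing Machine simulator. It is our own illustration of Turing's definition, not the paper's CTM (whose processors, competition, and broadcast are described above); the function run_tm and the bit-flipping example are invented for this sketch.

    # A toy Turing Machine: a transition table, a tape, and a head position.
    def run_tm(transitions, tape, state="q0", accept="halt", blank="_",
               max_steps=1000):
        """transitions: (state, symbol) -> (new_state, write, move in {-1, 0, +1})."""
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == accept:
                break
            symbol = cells.get(head, blank)
            state, write, move = transitions[(state, symbol)]
            cells[head] = write
            head += move
        return "".join(cells[i] for i in sorted(cells))

    # Example machine: flip every bit, then halt at the first blank cell.
    flip = {
        ("q0", "0"): ("q0", "1", +1),
        ("q0", "1"): ("q0", "0", +1),
        ("q0", "_"): ("halt", "_", 0),
    }
    print(run_tm(flip, "0110"))  # prints "1001_"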


Representation Internal-Manipulation (RIM): A Neuro-Inspired Computational Theory of Consciousness

arXiv.org Artificial Intelligence

Many theories, based on neuroscientific and psychological empirical evidence and on computational concepts, have been elaborated to explain the emergence of consciousness in the central nervous system. These theories propose key fundamental mechanisms to explain consciousness, but they only partially connect such mechanisms to the possible functional and adaptive role of consciousness. Recently, some cognitive and neuroscientific models have tried to close this gap by linking consciousness to various aspects of goal-directed behaviour, the pivotal cognitive process that allows mammals to flexibly act in challenging environments. Here we propose the Representation Internal-Manipulation (RIM) theory of consciousness, a theory that links the main elements of consciousness theories to components and functions of goal-directed behaviour, ascribing to consciousness a central role in the goal-directed manipulation of internal representations. This manipulation relies on four specific computational operations to perform the flexible internal adaptation of all key elements of goal-directed computation, from the representations of objects to those of goals, actions, and plans. Finally, we propose the concept of 'manipulation agency', relating the sense of agency to the internal manipulation of representations. This allows us to propose that the subjective experience of consciousness is associated with the human capacity to generate and control a simulated internal reality that is vividly perceived and felt through the same perceptual and emotional mechanisms used to tackle the external world.


Neuroscience Readies for a Showdown Over Consciousness Ideas - Quanta Magazine

#artificialintelligence

Some problems in science are so hard, we don't really know what meaningful questions to ask about them -- or whether they are even truly solvable by science. Consciousness is one of those: Some researchers think it is an illusion; others say it pervades everything. Some hope to see it reduced to the underlying biology of neurons firing; others say that it is an irreducibly holistic phenomenon. The question of what kinds of physical systems are conscious "is one of the deepest, most fascinating problems in all of science," wrote the computer scientist Scott Aaronson of the University of Texas at Austin. "I don't know of any philosophical reason why [it] should be inherently unsolvable" -- but "humans seem nowhere close to solving it." Now a new project currently under review hopes to close in on some answers. It proposes to draw up a suite of experiments that will expose theories of consciousness to a merciless spotlight, in the hope of ruling out at least some of them.


The idea that everything from spoons to stones is conscious is gaining academic credibility

#artificialintelligence

This sounds like easily dismissible bunkum, but as traditional attempts to explain consciousness continue to fail, the "panpsychist" view is increasingly being taken seriously by credible philosophers, neuroscientists, and physicists, including figures such as neuroscientist Christof Koch and physicist Roger Penrose. "Why should we think common sense is a good guide to what the universe is like?" says Philip Goff, a philosophy professor at Central European University in Budapest, Hungary. "Einstein tells us weird things about the nature of time that counter common sense; quantum mechanics runs counter to common sense." David Chalmers, a philosophy of mind professor at New York University, laid out the "hard problem of consciousness" in 1995, demonstrating that there was still no answer to the question of what causes consciousness. Traditionally, two dominant perspectives, materialism and dualism, have provided a framework for solving this problem.


Consciousness Began When the Gods Stopped Speaking - Issue 54: The Unspoken

Nautilus

He must have been an odd sight there among the undergraduates, some of whom knew him as a lecturer who taught psychology, holding forth in a deep baritone voice.