Goto

Collaborating Authors

The 4 ingredients to create consciousness could explain our own minds

New Scientist

WHAT is it like to be a bat? Philosopher Thomas Nagel's 1974 question has come to dominate our thinking on consciousness. Nagel's point, simply put, is that even if we could fly and navigate using sonar, we would never grasp what it feels like to be a bat. The argument has come to epitomize the "hard problem" of consciousness: the intractability of explaining subjective experience. Consciousness isn't something you can measure or weigh; its ethereal quality is so fascinating as to verge on the mystical.


Are You a Thinking Thing? Why Debating Machine Consciousness Matters

#artificialintelligence

In 1637, when he published the Discourse on the Method, René Descartes unleashed a philosophical breakthrough that became a fundamental principle on which much of modern philosophy now stands. Nearly 400 years later, if a machine says these five powerful words, "I think therefore I am," does the statement still hold true? If so, who then is this "I" that is doing the thinking? In a recent talk, Ray Kurzweil described the difficulty of measuring machine consciousness: "We can't just ask an entity, 'Are you conscious?' because we can ask entities in video games today, and they'll say, 'Yes, I'm conscious and I'm angry at you.' But we don't believe them because they don't have the subtle cues that we associate with really having that subjective state."


Conscious Machines: The AI Perspective

AAAI Conferences

Efforts to study computational aspects of the conscious mind have made substantial progress, but have yet to provide a compelling route to creating a phenomenally conscious machine. Here I suggest that an important reason for this is the computational explanatory gap: our inability to explain the implementation of high level cognitive algorithms that are of interest in AI in terms of neurocomputational processing. Bridging this gap could contribute to further progress in machine consciousness, to producing artificial general intelligence, and to understanding the fundamental nature of consciousness.


Nelson

AAAI Conferences

Over the last several decades, research efforts have explored various forms of artificial life and embodied artificial life as methods for developing autonomous agents. Such approaches, although part of the AI canon, are rarely used in research aimed at creating artificial general intelligence. This paper explores the prospects of using in silico artificial evolution to develop machine consciousness, or strong AI. It is possible that artificial evolution and situated self-organizing agents could become viable tools for studying machine consciousness, but several issues must be overcome. One problem is the use of exogenous selection methods to drive artificial evolutionary processes. A second problem is agent representations that are inconsistent with the environments in which the agents are situated. These issues limit the potential for open-ended evolution and for fine-grained fitting of agents to their environments, both of which are likely to be important for the eventual development of situated artificial consciousness.
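To make "exogenous selection" concrete, below is a minimal Python sketch of the pattern the abstract criticizes: an evolutionary loop in which survival is decided by a fitness function the experimenter imposes from outside the simulated world, rather than by pressures arising within the agents' environment. The names (Agent, exogenous_fitness, evolve) and the toy objective are illustrative assumptions, not anything taken from the paper.

import random

class Agent:
    def __init__(self, genome=None):
        # genome: a flat list of parameters controlling the agent's behavior
        self.genome = genome or [random.uniform(-1, 1) for _ in range(8)]

    def mutate(self, rate=0.1):
        # Return a child with small Gaussian perturbations of each parameter.
        return Agent([g + random.gauss(0, rate) for g in self.genome])

def exogenous_fitness(agent):
    # Hand-written objective imposed by the experimenter (toy example:
    # drive every genome value toward 1.0). Nothing about the agent's
    # situated environment determines who survives.
    return -sum((g - 1.0) ** 2 for g in agent.genome)

def evolve(pop_size=50, generations=100):
    population = [Agent() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank agents by the externally supplied objective and keep the top half.
        population.sort(key=exogenous_fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated offspring of the survivors.
        population = survivors + [random.choice(survivors).mutate()
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=exogenous_fitness)

if __name__ == "__main__":
    best = evolve()
    print("best genome:", [round(g, 2) for g in best.genome])

An endogenous, open-ended alternative would drop exogenous_fitness entirely and let reproduction depend on resources agents gather inside the simulated environment; the abstract's point is that externally imposed objectives like the one above constrain what can evolve.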


Machine Consciousness Discussion: Penrose, Bach & Neven

@machinelearnbot
