Unthinkable: Could we make a computer trip on acid?


For such a vital human capacity, consciousness is still a mystery to us. The question "What is consciousness?" has long been explored by philosophers but traditionally shunned by scientists, "because it was considered 'spooky' or too vague or new-agey", according to author Andrew Smart. But that is starting to change, with neuroscientists and theorists in artificial intelligence joining the quest to locate what might be called the defining characteristic of humanity. Smart's new book Beyond Zero and One: Machines, Psychedelics and Consciousness puts forward a tantalising hypothesis: that consciousness is a type of hallucination that may have evolved through the aeons as a survival mechanism. To answer the question "What is consciousness?" one must imagine how a computer could become human, he says.

Are AI 'Thinking Machines' Really Thinking?


Since the development of the first universal computers, scientists have postulated the existence of an artificial consciousness: a constructed system that can mirror the complex interactions that take place within the human brain. While some public figures are openly terrified about a coming cyborg apocalypse, for most people artificial intelligence these days refers to tools and applications that help us get our work done faster, rather than to androids and artificial people. AI is now predominantly understood as a narrow use of a particular type of technology, distinct from artificial general intelligence (AGI), a much broader concept that encompasses synthetic consciousness. Considering the growth of the field over the past decade or so, and the massive ongoing investment, it is worth exploring just how far we have travelled along the path towards Terminators, replicants and R2-D2, and the problems that have presented themselves along the way. Many scientists and thinkers believe that AGI is a scientific inevitability based on the concept of universality, while others suggest that there are ontological and physical limitations that prevent the recreation of consciousness.

What is consciousness, and could machines have it?


The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word "consciousness" conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
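The C1/C2 distinction above can be made concrete with a deliberately simple toy model. The sketch below is illustrative only and is not drawn from the paper: a set of unconscious (C0) modules each propose content with a salience score; a workspace selects the most salient proposal and broadcasts it (a cartoon of C1's global availability); and a self-monitoring step reports a confidence value derived from the margin of that selection (a cartoon of C2's subjective certainty). All class and variable names here are invented for illustration.

```python
class Processor:
    """A C0-style module: processes a stimulus without global access."""

    def __init__(self, name):
        self.name = name

    def propose(self, stimulus):
        # Toy salience: how strongly this module responds to the stimulus.
        return self.name, stimulus.get(self.name, 0.0)


class GlobalWorkspace:
    """C1 sketch: select the most salient content and make it globally available.

    C2 sketch: self-monitor the selection by reporting a confidence score,
    here the normalized margin between the winning and runner-up salience.
    """

    def __init__(self, processors):
        self.processors = processors

    def broadcast(self, stimulus):
        proposals = sorted(
            (p.propose(stimulus) for p in self.processors),
            key=lambda kv: kv[1],
            reverse=True,
        )
        winner, best = proposals[0]
        runner_up = proposals[1][1] if len(proposals) > 1 else 0.0
        confidence = (best - runner_up) / best if best > 0 else 0.0
        return winner, confidence


# Hypothetical usage: vision dominates, so it wins the broadcast,
# and the wide salience margin yields high reported confidence.
ws = GlobalWorkspace(
    [Processor("vision"), Processor("audition"), Processor("touch")]
)
content, conf = ws.broadcast({"vision": 0.9, "audition": 0.3, "touch": 0.1})
```

Nothing about this toy is conscious, of course; it only shows the computational shape the authors attribute to C1 (selection for global broadcast) and C2 (a graded report about one's own processing), as distinct from the feed-forward C0 modules.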

Conscious Machines: The AI Perspective

AAAI Conferences

Efforts to study computational aspects of the conscious mind have made substantial progress, but have yet to provide a compelling route to creating a phenomenally conscious machine. Here I suggest that an important reason for this is the computational explanatory gap: our inability to explain the implementation of high level cognitive algorithms that are of interest in AI in terms of neurocomputational processing. Bridging this gap could contribute to further progress in machine consciousness, to producing artificial general intelligence, and to understanding the fundamental nature of consciousness.

Does this chess problem reveal the key to human consciousness?


Artificial intelligence hasn't taken over the world ... yet. But while humans can still outperform computers on many high-level intelligence tasks, at this point most people would concede the game of chess to the machines. The best chess-playing programs can already school just about any average human player, and they've proven capable of beating our grandmasters too. But maybe there's still some hope for us, even when it comes to chess. Scientists with the Penrose Institute have devised a unique chess problem that's fairly simple for humans to solve, but which seems to reliably stump even the most sophisticated chess programs.