Many traditional philosophical questions take new twists in the context of intelligent machines. For example: What is a mind? What is consciousness? Where do we draw the line on responsibility for actions when dealing with robots, computers, and programs? Do human beings occupy a privileged place in the universe?
This week we are reprinting our top stories of 2020. This article first appeared online in our Facts So Romantic blog in May 2020. That month, the cover of New Scientist ran the headline, "Is the Universe Conscious?" Mathematician and physicist Johannes Kleiner, at the Munich Center for Mathematical Philosophy in Germany, told author Michael Brooks that a mathematically precise definition of consciousness could mean that the cosmos is suffused with subjective experience. "This could be the beginning of a scientific revolution," Kleiner said, referring to research he and others have been conducting.
One of the most frequent causes of confusion is the difference between Artificial Intelligence (AI) and Artificial General Intelligence (AGI). Yes, AI is pretty much just information processing, but that is because "intelligence" has been redefined within the field. It's like how physicists use the term "information" in the context of entropy: a crucial, but somewhat slippery, distinction that contradicts everyday usage of the word. AI was originally intended to mean what AGI means now, but as people worked on it, and as we came to appreciate how sophisticated the things our brains do effortlessly really are, the goalposts moved. "AI" has come to mean any small step on the journey toward more sophisticated computers.
The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. This paper studies consciousness from the perspective of theoretical computer science. It formalizes the Global Workspace Theory (GWT) originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, and others. Our major contribution lies in the precise formal definition of a Conscious Turing Machine (CTM), also called a Conscious AI. We define the CTM in the spirit of Alan Turing's simple yet powerful definition of a computer, the Turing Machine (TM). We are not looking for a complex model of the brain nor of cognition but for a simple model of (the admittedly complex concept of) consciousness. After formally defining CTM, we give a formal definition of consciousness in CTM. We then suggest why the CTM has the feeling of consciousness. The reasonableness of the definitions and explanations can be judged by how well they agree with commonly accepted intuitive concepts of human consciousness, the breadth of related concepts that the model explains easily and naturally, and the extent of its agreement with scientific evidence.
As we saw yesterday, artificial intelligence (AI) has enjoyed a string of unbroken successes against humans. But these are successes in games, where the map is the territory. That fact hints at the problem tech philosopher and futurist George Gilder raises in Gaming AI (free download here). Whether all human activities can be treated that way successfully is an entirely different question. As Gilder puts it, "AI is a system built on the foundations of computer logic, and when Silicon Valley's AI theorists push the logic of their case to a 'singularity,' they defy the most crucial findings of twentieth-century mathematics and computer science." Here is one of the crucial findings they defy (or ignore): Philosopher Charles Sanders Peirce (1839–1914) pointed out that, generally, mental activity comes in threes, not twos (so he called it triadic).
Can machines (or computers) think? What did Alan Turing have to say about that question? He believed it was too meaningless to deserve discussion: "The original question, 'Can machines think?', I believe to be too meaningless to deserve discussion." In other words, how can we even answer the question if we don't really know what thinking actually is in the first place?
A professor of neuroscience at the University of Surrey claims to have solved the long-standing mystery of what creates human consciousness. According to Dr Johnjoe McFadden, the electromagnetic field produced by the brain's neurons is what produces this uniquely human trait. Vast amounts of research have gone into deciphering why we have the ability to know we think, whereas other animals do not. Previous attempts to understand this have included the spiritual and supernatural, including the suggestion that it comes from a soul. But Professor McFadden is basing his theory, published in the journal Neuroscience of Consciousness, on well-known scientific fact.
When Lee Sedol, champion of the Chinese board game Go, was defeated by DeepMind's AI AlphaGo, the most common quote on social media was: "You lost and you cried; the computer won but it did not smile." What can be the holy grail of artificial intelligence (AI)? It is not memory -- today's supercomputers can store more information than an average human brain with 100 billion neurons. It is also not computing power, which has long been exceeded by AI machines running at petaflops (a unit of computing speed equal to one thousand million million floating-point operations per second). Until recently, we believed that computers could never learn, that they needed to be hard-wired for certain goals. With neural networks and machine learning, that citadel is increasingly being invaded.
The scientific study of consciousness is currently undergoing a critical transition in the form of a rapidly evolving scientific debate regarding whether or not currently proposed theories can be assessed for their scientific validity. At the forefront of this debate is Integrated Information Theory (IIT), widely regarded as the preeminent theory of consciousness because of its quantification of consciousness in terms of a scalar mathematical measure called $\Phi$ that is, in principle, measurable. Epistemological issues in the form of the "unfolding argument" have provided a refutation of IIT by demonstrating how it permits functionally identical systems to have differences in their predicted consciousness. The implication is that IIT, and any other proposed theory based on a system's causal structure, may already be falsified even in the absence of experimental refutation. However, so far the arguments surrounding the issue of falsification of theories of consciousness are too abstract to readily determine the scope of their validity. Here, we make these abstract arguments concrete by providing a simple example of functionally equivalent machines, realizable with table-top electronics, that take the form of isomorphic digital circuits with and without feedback. This allows us to explicitly demonstrate the different levels of abstraction at which a theory of consciousness can be assessed. Within this computational hierarchy, we show how IIT is simultaneously falsified at the finite-state automaton (FSA) level and unfalsifiable at the combinatorial state automaton (CSA) level. We use this example to illustrate a more general set of criteria for theories of consciousness: to avoid being unfalsifiable or already falsified, scientific theories of consciousness must be invariant with respect to changes that leave the inference procedure fixed at a given level in a computational hierarchy.
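The unfolding argument described in this abstract can be illustrated with a toy example (a sketch of our own, not code from the paper): two machines that compute the same input-output function, one using internal feedback and one "unfolded" into a feedforward form that recomputes each output from the input history. Because IIT's $\Phi$ is defined over a system's internal causal structure rather than its behavior, it can assign different degrees of consciousness to two such behaviorally identical systems.

```python
# Toy illustration of the "unfolding argument": two machines computing
# the running parity of a bit stream. They are functionally identical
# (same outputs for every input), yet differ in causal structure.

def recurrent_parity(bits):
    """Feedback machine: a single internal state bit, updated in place.
    The new state causally depends on the old state (recurrence)."""
    state = 0
    outputs = []
    for b in bits:
        state ^= b              # feedback: state feeds into itself
        outputs.append(state)
    return outputs

def feedforward_parity(bits):
    """Unfolded machine: each output is recomputed from scratch over the
    input prefix, with no reused internal state (purely feedforward)."""
    outputs = []
    for i in range(len(bits)):
        acc = 0
        for b in bits[:i + 1]:  # recompute the whole prefix each step
            acc ^= b
        outputs.append(acc)
    return outputs

stream = [1, 0, 1, 1, 0, 1]
print(recurrent_parity(stream))    # → [1, 1, 0, 1, 1, 0]
print(feedforward_parity(stream))  # → [1, 1, 0, 1, 1, 0]
assert recurrent_parity(stream) == feedforward_parity(stream)
```

No behavioral experiment on the outputs alone can distinguish the two machines, which is exactly why a theory whose predictions depend on the feedback structure (such as IIT at the FSA level) cannot be tested by input-output observation.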
The question about the emergence of consciousness is perhaps the most important question humanity should attempt to answer. Consciousness and its contents are at the root of everything. Consciousness is what is responsible for all of the greatest artifacts of culture that humanity has created: art, music, science, philosophy, technology. Every child, adolescent, and adult ought to ask themselves: what is consciousness? What is it like to be human?