Many traditional philosophical questions take new twists in the context of intelligent machines. For example: What is a mind? What is consciousness? Where do we draw the line on responsibility for actions when dealing with robots, computers, and programs? Do human beings occupy a privileged place in the universe?
This month, the cover of New Scientist ran the headline, "Is the Universe Conscious?" Mathematician and physicist Johannes Kleiner, at the Munich Center for Mathematical Philosophy in Germany, told author Michael Brooks that a mathematically precise definition of consciousness could mean that the cosmos is suffused with subjective experience. "This could be the beginning of a scientific revolution," Kleiner said, referring to research he and others have been conducting. Kleiner and his colleagues are focused on the Integrated Information Theory of consciousness, one of the more prominent such theories today. As Kleiner notes, IIT (as the theory is known) is thoroughly panpsychist, because on its account anything with integrated information has at least one bit of consciousness.
According to the Brookings Institution, AI is generally thought to refer to "machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention." More simply put, AI uses algorithms to make decisions using real-time data. But unlike more traditional machines that can only respond in predetermined ways, AI can act on data, analyzing it and responding to it. The concept has been evolving and the technology has become more sophisticated, but it's still a little nebulous, particularly for folks working in local government. It seems everyone kind of knows what AI is, but no one is exactly sure how to apply it in their communities.
Artificial intelligence is a bit of a buzz term these days, but what do people really mean when they say AI? And why should local governments care? First of all, AI is widely misunderstood. We aren't necessarily talking about HAL from "2001: A Space Odyssey"; we're talking about the "thinking machines" Alan Turing speculated about back in the 1950s. According to the Brookings Institution, AI is generally thought to refer to "machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention."
Philip Pullman is once again having a moment, thanks to the new blockbuster adaptation of His Dark Materials by the BBC and HBO. His fantasy classic--filled with witches, talking bears and "daemons" (people's alter-egos that take animal form)--is rendered in glorious steampunk detail. Pullman has also returned to the fictional world of his heroine, Lyra Belacqua, with a new trilogy, The Book of Dust, which probes more deeply into the central question of his earlier books: What is the nature of consciousness? Pullman loves to write about big ideas, and recent scientific discoveries about dark matter and the Higgs boson have inspired certain plot elements in his novels. The biggest mystery in these books--an enigmatic substance called Dust--comes right out of current debates among scientists and philosophers about the origins of consciousness and the provocative theory of panpsychism.
This is a simple introduction to the philosopher John Searle's main argument against artificial intelligence (AI). As an introduction, it doesn't come down either for or against that argument. The main body of Searle's argument is how he distinguishes syntax from semantics. Thus the well-known Chinese Room scenario is simply Searle's means of expressing what he sees as the vital distinction to be made between syntax and semantics when it comes to debates about computers and AI generally. One way in which Searle puts his case is by reference to reference itself. That position is summed up simply when Searle (in his 'Minds, Brains, and Programs' of 1980) writes: "Whereas the English subsystem knows that 'hamburgers' refers to hamburgers, the Chinese subsystem knows only that 'squiggle squiggle' is followed by 'squoggle squoggle'." So whereas what Searle calls the "English subsystem" involves a complex reference-relation that takes in entities in the world, mental states, knowledge of meanings, intentionality, consciousness, memory and other such things, the Chinese subsystem is only following rules.
The nature of consciousness seems to be unique among scientific puzzles. Not only do neuroscientists have no fundamental explanation for how it arises from physical states of the brain, but we are not even sure whether we ever will. Astronomers wonder what dark matter is, geologists seek the origins of life, and biologists try to understand cancer--all difficult problems, of course, yet at least we have some idea of how to go about investigating them and rough conceptions of what their solutions could look like. Our first-person experience, on the other hand, lies beyond the traditional methods of science. Following the philosopher David Chalmers, we call it the hard problem of consciousness. But perhaps consciousness is not uniquely troublesome. Going back to Gottfried Leibniz and Immanuel Kant, philosophers of science have struggled with a lesser-known, but equally hard, problem of matter: What is physical matter in and of itself, behind the mathematical structure described by physics?
The greater use of artificial intelligence (AI) and autonomous systems by the militaries of the world has the potential to affect deterrence strategies and escalation dynamics in crises and conflicts. Up until now, deterrence has involved humans trying to dissuade other humans from taking particular courses of action. What happens when the thinking and decision processes involved are no longer purely human? How might dynamics change when decisions and actions can be taken at machine speeds? How might AI and autonomy affect the ways that countries have developed to signal one another about the potential use of force?
Many theories, based on neuroscientific and psychological empirical evidence and on computational concepts, have been elaborated to explain the emergence of consciousness in the central nervous system. These theories propose key fundamental mechanisms to explain consciousness, but they only partially connect such mechanisms to the possible functional and adaptive role of consciousness. Recently, some cognitive and neuroscientific models have tried to fill this gap by linking consciousness to various aspects of goal-directed behaviour, the pivotal cognitive process that allows mammals to act flexibly in challenging environments. Here we propose the Representation Internal-Manipulation (RIM) theory of consciousness, a theory that links the main elements of consciousness theories to components and functions of goal-directed behaviour, ascribing to consciousness a central role in the goal-directed manipulation of internal representations. This manipulation relies on four specific computational operations to perform the flexible internal adaptation of all key elements of goal-directed computation, from the representations of objects to those of goals, actions, and plans. Finally, we propose the concept of 'manipulation agency', relating the sense of agency to the internal manipulation of representations. This allows us to propose that the subjective experience of consciousness is associated with the human capacity to generate and control a simulated internal reality that is vividly perceived and felt through the same perceptual and emotional mechanisms used to tackle the external world.