Many traditional philosophical questions take new twists in the context of intelligent machines. For example: What is a mind? What is consciousness? Where do we draw the line on responsibility for actions when dealing with robots, computers, and programs? Do human beings occupy a privileged place in the universe?
It's hard to avoid the prominence of AI in our lives, and there is a plethora of predictions about how it will influence our future. I caught up with Olaf Groth, Professor of Strategy, Innovation and Economics at HULT International Business School and CEO of the advisory network Cambrian.ai, and his co-author about their new book Solomon's Code: Humanity in a World of Thinking Machines. We discussed the continuing integration of technology and humans, and their call for a "Digital Magna Carta," a broadly accepted charter developed by a multi-stakeholder congress that would help guide the development of advanced technologies and harness their power for the benefit of all humanity.

Lisa Kay Solomon: Your new book, Solomon's Code, explores artificial intelligence and its broader human, ethical, and societal implications that all leaders need to consider. AI is a technology that's been in development for decades.
The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness",[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980, and it has been widely discussed in the years since. The centerpiece of the argument is a thought experiment known as the Chinese room. The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[b] Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general.

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.
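The mechanical character of the room can be sketched in a few lines of code. This is a toy illustration only: the two-entry rulebook below is invented for this example, standing in for Searle's hypothetical program, and real conversational programs are vastly more elaborate. The point it makes is Searle's: the function maps input symbols to output symbols purely by form, attaching no meaning to either.

```python
# A toy "Chinese room": the rulebook pairs input symbol strings with
# output symbol strings. The entries here are made up for illustration.
RULEBOOK = {
    "你好吗?": "我很好,谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会。",     # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rulebook prescribes for the input.

    Like Searle inside the room, this function manipulates the strings
    purely as uninterpreted shapes; no understanding is involved.
    """
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please repeat that."
```

However convincing the outputs look to a Chinese speaker, nothing in the lookup assigns meaning to the symbols, which is exactly the distinction the thought experiment turns on.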
Doing this requires a consistent terminology that makes principled distinctions and allows clear operationalization of the different concepts that a science of emotion will use. The study of emotion and its subjective counterpart, feeling, is an instance of the brain/mind problem, and the need for a consistent terminology extends to the general case considered here. There are many other disciplines and practices concerned with the human mind, and the foundations for scientific study are not always clear. For example, a statement that a spider or a robot has a mind could be either a scientific or a definitional assertion.
Why call it machine learning when it has nothing to do with thinking machines? Machine learning may not have much to do with thinking machines today, but that wasn't always the case. It's important to realize that when the terms "Machine Learning" and "Artificial Intelligence" were coined in the late 1950s, the goal was very much to create thinking machines. The field's founders not only thought that what we now call "Artificial General Intelligence" was possible, but that it was bound to happen within a matter of decades, if not years. Furthermore, they thought that the various approaches they were exploring, such as logic programming and machine learning, would be enough to get them there.
In an earlier blog article I wrote about how human intelligence differs from artificial intelligence: human intelligence is general, while artificial intelligence is specialized. That article offers food for thought for those who fear technological evolution, and specifically AI. In today's article I offer more reflections on the evolution of AI. Put simply, AI is about thinking machines. In 1950, the English computer scientist Alan Turing became the first academic to propose considering the question "Can machines think?"
This sounds like easily dismissible bunkum, but as traditional attempts to explain consciousness continue to fail, the "panpsychist" view is increasingly being taken seriously by credible philosophers, neuroscientists, and physicists, including neuroscientist Christof Koch and physicist Roger Penrose. "Why should we think common sense is a good guide to what the universe is like?" says Philip Goff, a philosophy professor at Central European University in Budapest, Hungary. "Einstein tells us weird things about the nature of time that counter common sense; quantum mechanics runs counter to common sense." David Chalmers, a professor of philosophy of mind at New York University, laid out the "hard problem of consciousness" in 1995, arguing that there was still no answer to the question of what causes consciousness. Traditionally, two dominant perspectives, materialism and dualism, have provided frameworks for approaching this problem.
Artist Stephanie Dinkins tells a fascinating story about her work with an AI robot made to look like an African-American woman, and about at times sensing some type of consciousness in the machine. She was speaking at the de Young Museum's Thinking Machines conversation series, along with anthropologist Tobias Rees, Director of Transformation with the Humans Program at the American Institute. Dinkins is Associate Professor of Art at Stony Brook University, and her work includes teaching communities about AI and algorithms and trying to answer questions such as: Can a community trust AI systems it did not create? She has worked with pre-college students in poor neighborhoods in Brooklyn, teaching them how to create AI chatbots. They made a chatbot that told "Yo Mamma" jokes, which she called a success because it showed how AI can be made to reflect local traditions.
Here are the slides from my York Festival of Ideas keynote yesterday, which introduced the festival focus day Artificial Intelligence: Promises and Perils. I start the keynote with Alan Turing's famous question: Can a Machine Think? and explain that thinking is not just the conscious reflection of Rodin's Thinker but also the largely unconscious thinking required to make a pot of tea. I note that at the dawn of AI 60 years ago we believed the former kind of thinking would be really difficult to emulate artificially and the latter easy. In fact it has turned out to be the other way round: we've had computers that can expertly play chess for 20 years, but we can't yet build a robot that could go into your kitchen and make you a cup of tea. In slides 5 and 6 I suggest that we all assume a cat is smarter than a crocodile, which is smarter than a cockroach, on a linear scale of intelligence from not very intelligent to human intelligence.
Every day brings considerable AI news, from breakthrough capabilities to dire warnings. A quick read of recent headlines shows both: an AI system that claims to predict dengue fever outbreaks up to three months in advance, and an opinion piece from Henry Kissinger arguing that AI will end the Age of Enlightenment. Then there's the father of AI, who doesn't believe there's anything to worry about. Meanwhile, Robert Downey Jr. is in the midst of developing an eight-part documentary series about AI to air on Netflix. AI is more than just "hot"; it's everywhere.
The mechanism of consciousness is one of the most fundamental, exciting, and challenging pursuits in 21st-century science. Although the field of consciousness studies attracts a diverse array of thinkers who posit myriad physical or metaphysical substrates for experience, consciousness must have a neural basis. But where in the brain is conscious experience generated? It would seem that, given this remarkable era of technical and experimental prowess in the neurosciences, we would be homing in on the specific circuits or precise neuronal subpopulations that generate experience. To the contrary, there is still active debate as to whether the neural correlates of consciousness are, in coarse terms, located in the back or the front of the brain (1, 2).