Today the field has progressed to the point where algorithms can recognize photos, speech and emotions, fly a drone or drive a truck, spot early signs of diabetes or cancer, and play chess and poker at a championship level. Now, in a leap that could prove futuristic, absurd, or life-changing (nobody can predict which), the vision is of a robot religion that worships an AI godhead. Anthony Levandowski, known for his contributions to driverless-car technology and regarded as a pioneering AI visionary, gained wide media attention by actually forming an AI church named The Way of the Future. He is searching for adherents, and sees an AI godhead not as ridiculous but as inevitable. As he told an interviewer from Wired magazine, "It's not a god in the sense that it makes lightning or causes hurricanes."
As computers gain speed and accomplish dazzling feats like defeating the world's masters at games of chess and Go, some of the planet's brightest minds -- Elon Musk and Stephen Hawking among them -- warn that we human beings may find ourselves obsolete. Further, a kind of artificial intelligence arms race may come to dominate geopolitics, rewarding the owners of the best AI mining the biggest pools of "big data" -- most likely, as a result of its sheer size, China.
Parts of this essay by Andrew Smart are adapted from his book Beyond Zero And One (2015), published by OR Books. Machine intelligence is growing at an increasingly rapid pace. The leading minds on the cutting edge of AI research think that machines with human-level intelligence will likely be realized by the year 2100. Beyond this, artificial intelligences that far outstrip human intelligence would rapidly be created by the human-level AIs. Such vastly superhuman AI would result from an "intelligence explosion."
Deepak Chopra pulls back his sleeve to show me his Fitbit. His heart rate is 69 beats per minute. "I can consciously bring it down," he says. "This technology is helping me become a better Yogi." Chopra, a physician and celebrity wellness adviser, is known for promoting the benefits of meditation -- clearing the distractions that clutter the mind.
The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness",[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since. The centerpiece of the argument is a thought experiment known as the Chinese room.

The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[b] Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general.

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.