Chinese room - Wikipedia

#artificialintelligence

The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness",[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since.[1] The centerpiece of the argument is a thought experiment known as the Chinese room.[2] The argument is directed against the philosophical positions of functionalism and computationalism,[3] which hold that the mind may be viewed as an information-processing system operating on formal symbols. The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[b]

Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display.[4] The argument applies only to digital computers running programs and does not apply to machines in general.[5]

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.
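To make the kind of formal symbol manipulation at issue concrete, here is a minimal, hypothetical sketch (not from Searle's paper; the rule book, phrases, and function name are invented for illustration) of a program that maps incoming Chinese characters to outgoing Chinese characters purely by matching rules, with no representation of their meaning anywhere:

```python
# A hypothetical, minimal sketch of purely formal symbol manipulation:
# incoming Chinese characters are matched against a rule book and other
# Chinese characters are emitted, with no representation of meaning.
# The rule book and phrases are invented for this illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thank you."
    "你会说中文吗？": "当然会。",     # "Do you speak Chinese?" -> "Of course."
}

def respond(input_symbols: str) -> str:
    """Look up the input shapes and return the prescribed output shapes."""
    # Fallback reply: "Please say that again."
    return RULE_BOOK.get(input_symbols, "请再说一遍。")

if __name__ == "__main__":
    print(respond("你好吗？"))  # prints: 我很好，谢谢。
```

The point of the thought experiment is that following such rules, however large the rule book, involves no understanding of the symbols being manipulated.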


Artificial Intelligence: Structures and Strategies for Complex Problem Solving

AITopics Original Links

Many and long were the conversations between Lord Byron and Shelley to which I was a devout and silent listener. During one of these, various philosophical doctrines were discussed, and among others the nature of the principle of life, and whether there was any probability of its ever being discovered and communicated. They talked of the experiments of Dr. Darwin (I speak not of what the doctor really did or said that he did, but, as more to my purpose, of what was then spoken of as having been done by him), who preserved a piece of vermicelli in a glass case till by some extraordinary means it began to move with a voluntary motion. Not thus, after all, would life be given. Perhaps a corpse would be reanimated; galvanism had given token of such things: perhaps the component parts of a creature might be manufactured, brought together, and endued with vital warmth (Butler 1998).


A neuroscientist explains why artificially intelligent robots will never have consciousness like humans

#artificialintelligence

Some of today's top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios that are likely to arise as a result of machines with motives. Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us--or perhaps just dispose of us, much like scum getting obliterated by a windshield wiper. In fact, Dr. Hawking told the BBC, "The development of full artificial intelligence could spell the end of the human race." Indeed, there is little doubt that future A.I. will be capable of doing significant damage. For example, it is conceivable that robots could be programmed to function as tremendously dangerous autonomous weapons unlike any seen before.


Rethinking Weak Vs. Strong AI

#artificialintelligence

Artificial intelligence can be applied in a broad range of ways, from chatbots to predictive analytics, from recognition systems to autonomous vehicles, and many other patterns. However, there is also the big overarching goal of AI: to make a machine intelligent enough that it can handle any general cognitive task in any setting, just like our own human brains. The general AI ecosystem classifies these AI efforts into two major buckets: weak (narrow) AI, which is focused on one particular problem or task domain, and strong (general) AI, which aims to build intelligence that can handle any task or problem in any domain. From the perspective of researchers, the more an AI system approaches the abilities of a human, with all the intelligence, emotion, and broad applicability of knowledge of humans, the "stronger" that AI is. On the other hand, the narrower in scope and the more specific to a particular application an AI system is, the weaker it is considered in comparison.
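As a rough, hypothetical illustration of the two buckets (the classes, task, and keyword list below are invented for this example and not taken from the article), a narrow system is competent in exactly one task domain, while a general system would have to accept arbitrary tasks:

```python
# Hypothetical sketch of the weak-vs-strong distinction; the classes,
# task, and keyword list are invented for illustration only.

class NarrowSpamFilter:
    """Weak (narrow) AI: competent in exactly one task domain."""

    SPAM_WORDS = {"prize", "winner", "free", "click"}

    def classify(self, email_text: str) -> str:
        # Decide spam vs. ham from a fixed keyword list; the system
        # has no competence outside this single task.
        words = set(email_text.lower().split())
        return "spam" if words & self.SPAM_WORDS else "ham"


class GeneralAgent:
    """Strong (general) AI: would have to handle any task in any domain."""

    def solve(self, task_description: str, observation: object) -> object:
        # Left unimplemented: the article describes general AI as an
        # overarching goal rather than an existing capability.
        raise NotImplementedError("general intelligence is a goal, not a feature")


if __name__ == "__main__":
    print(NarrowSpamFilter().classify("Click here to claim your free prize"))  # -> spam
```

The general side is deliberately left as an unimplemented interface, since the article presents strong AI as the field's overarching goal rather than something already built.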