Is It Enough to Get the Behaviour Right?

AAAI Conferences

This paper deals with the relationship between intelligent behaviour, on the one hand, and the mental qualities needed to produce it, on the other. We consider two well-known opposing positions on this issue: one due to Alan Turing and one due to John Searle (via the Chinese Room). In particular, we argue against Searle, showing that his answer to the so-called System Reply does not work. The argument takes a novel form: we shift the debate to a different and more plausible room where the required conversational behaviour is much easier to characterize and to analyze. Although this room is much simpler than the Chinese Room, we show that the behaviour there is still complex enough that it cannot be produced without appropriate mental qualities.


Artificial Intelligence (AI): A non-intelligent intelligence?

#artificialintelligence

Every time someone gets a computer or a robot to play a game or solve a new task or problem, a lot of people come out to remind us that the machine's activity is not genuine intelligence (that is, intelligence supposedly like ours): it is mere computation, carried out thanks to the human capacity to program something that never stops being a piece of silicon, sheet metal, and wires. This is called the "artificial intelligence effect," and it is widespread. No matter what feat the machine achieves: if it defeats the world chess champion, that is taken as mere computation (remarkable, yes, but nothing to do with real intelligence). We do not accept that there is genuine intelligence as long as we can understand how the machine works to do something or to answer a problem. Never mind attributing consciousness to a supercomputer, or supposing that it could suffer from mental illness (as it could if it had a mind).


John Searle's Syntax-vs.-Semantics Argument Against Artificial Intelligence (AI)

#artificialintelligence

This is a simple introduction to the philosopher John Searle's main argument against artificial intelligence (AI). That means it comes down neither for nor against the argument. The core of Searle's argument is how he distinguishes syntax from semantics. Thus the well-known Chinese Room scenario is simply Searle's means of expressing what he sees as the vital distinction to be made between syntax and semantics when it comes to debates about computers and AI generally. One way in which John Searle puts his case is by reference to reference. That position is summed up simply when Searle (in his 'Minds, Brains, and Programs' of 1980) writes: "Whereas the English subsystem knows that 'hamburgers' refers to hamburgers, the Chinese subsystem knows only that 'squiggle squiggle' is followed by 'squoggle squoggle'." So whereas what Searle calls the "English subsystem" involves a complex reference-relation that connects entities in the world, mental states, knowledge of meanings, intentionality, consciousness, memory, and other such things, the Chinese subsystem is only following rules.
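
To make the contrast concrete, here is a minimal sketch of what a purely syntactic "subsystem" of this kind amounts to in code. Everything in it (the rule table, the function name, the single squiggle/squoggle rule) is an invented illustration of Searle's quoted example, not anything from his paper; the only point it makes is that no part of the program connects a symbol to the thing it refers to.

```python
# A purely syntactic "subsystem" in the spirit of Searle's quoted example.
# The rule table, names, and the single squiggle/squoggle rule are invented
# here for illustration; they are not Searle's program. The point is that
# nothing below encodes what any symbol refers to.

RULES = {
    "squiggle squiggle": "squoggle squoggle",  # "this shape is followed by that shape"
}

def chinese_subsystem(symbols: str) -> str:
    """Return the symbols that the rules say follow the input symbols.

    There is no reference relation here: the function never connects
    "squiggle squiggle" to hamburgers or to anything else in the world.
    It only records which string follows which.
    """
    return RULES.get(symbols, "")  # no rule for unknown input

if __name__ == "__main__":
    print(chinese_subsystem("squiggle squiggle"))  # -> squoggle squoggle
```

Whether piling up enough such rules could ever yield full conversational behaviour, and whether doing so would amount to understanding, is precisely what the System Reply and Searle's response to it dispute.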


Chinese Room Argument - Internet Encyclopedia of Philosophy

AITopics Original Links

The Chinese room argument is a thought experiment of John Searle (1980a) and associated (1984) derivation. It is one of the best known and widely credited counters to claims of artificial intelligence (AI), that is, to claims that computers do or at least can (someday might) think. According to Searle's original presentation, the argument is based on two key claims: brains cause minds and syntax doesn't suffice for semantics. Its target is what Searle dubs "strong AI." According to strong AI, Searle says, "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states" (1980a, p. 417). Searle contrasts strong AI with "weak AI."


Chinese room - Wikipedia

#artificialintelligence

The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness",[a] regardless of how intelligent or human-like the program may make the computer's behaviour. The argument was first presented by philosopher John Searle in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980, and it has been widely discussed in the years since.[1] The centerpiece of the argument is a thought experiment known as the Chinese room.[2] The argument is directed against the philosophical positions of functionalism and computationalism,[3] which hold that the mind may be viewed as an information-processing system operating on formal symbols. On this view, the appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense that human beings have minds.[b] Although the argument was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display.[4] It applies only to digital computers running programs, not to machines in general.[5]

Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To every question the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.