Chinese room - Wikipedia

#artificialintelligence

The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness",[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since.[1] The centerpiece of the argument is a thought experiment known as the Chinese room.[2] The argument is directed against the philosophical positions of functionalism and computationalism,[3] which hold that the mind may be viewed as an information-processing system operating on formal symbols. The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[b] Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display.[4] The argument applies only to digital computers running programs and does not apply to machines in general.[5] Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.
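The excerpt above describes the setup procedurally: characters come in, a rulebook is consulted, characters go out, and nothing in the procedure refers to meaning. Below is a minimal Python sketch of that kind of purely formal lookup; the RULEBOOK contents and the chinese_room function are invented for illustration and obviously fall far short of anything that could pass a Turing test.

```python
# A toy illustration of the rule-following setup in Searle's thought experiment:
# the "program" maps input symbol strings to output symbol strings purely by
# their form, with no representation of meaning anywhere. The rulebook below is
# invented for illustration; a real system would need vastly richer rules.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",     # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(input_symbols: str) -> str:
    """Return the output string the rulebook pairs with the input string.

    The lookup is purely syntactic: it matches character sequences and never
    consults anything resembling meaning, which is Searle's point.
    """
    return RULEBOOK.get(input_symbols, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))
```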


The Age of Intelligent Machines: Can Computers Think?

AITopics Original Links

The complexities of the mind mirror the challenges of Artificial Intelligence. This article discusses the nature of thought itself: can it be replicated in a machine? From Ray Kurzweil's revolutionary book The Age of Intelligent Machines, published in 1990. At a time when computer technology is advancing at a breakneck pace and when software developers are glibly hawking their wares as having artificial intelligence, the inevitable question has begun to take on a certain urgency: Can a computer think? In one form or another this is actually a very old question, dating back to such philosophers as Plato, Aristotle, and Descartes. And after nearly 3,000 years the most honest answer is still "Who knows?" After all, what does it mean to think? So let's try some others.


Chinese Room Argument - Internet Encyclopedia of Philosophy

AITopics Original Links

The Chinese room argument is a thought experiment by John Searle (1980a) and an associated (1984) derivation. It is one of the best known and most widely credited counters to claims of artificial intelligence (AI), that is, to claims that computers do or at least can (someday might) think. According to Searle's original presentation, the argument rests on two key claims: brains cause minds, and syntax doesn't suffice for semantics. Its target is what Searle dubs "strong AI." According to strong AI, Searle says, "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states" (1980a, p. 417). Searle contrasts strong AI with "weak AI."
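For reference, the 1984 derivation mentioned above is usually reconstructed along the following lines; the labels and the premises about programs being syntactic and minds having semantics are a standard summary of Searle (1984), not part of the excerpt above.

```latex
% A common reconstruction of Searle's (1984) derivation (labels editorial).
\begin{enumerate}
  \item[A1.] Programs are purely formal (syntactic).
  \item[A2.] Minds have semantic contents.
  \item[A3.] Syntax by itself is neither constitutive of nor sufficient for semantics.
  \item[C1.] Therefore, programs by themselves are neither constitutive of nor
             sufficient for minds.
\end{enumerate}
% The further claim that "brains cause minds" is used to argue that anything
% else that causes minds would need causal powers equivalent to those of brains.
```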


Turing Test: What is it – and why isn't it the definitive word in artificial intelligence?

AITopics Original Links

The news that the Turing Test has been beaten by a computer for the first time could have significant implications for artificial intelligence – but just what is the Turing Test, and what does beating it actually mean? The test was first proposed by the British mathematician and computer scientist Alan Turing who, in his 1950 paper 'Computing Machinery and Intelligence', asked a simple question: 'Can machines think?' Turing later finessed this to 'Can machines do what we (as thinking entities) can do?' and proposed his eponymous test as one way of finding out. In its simplest form the test has a human interrogator speaking to a number of computers and humans through an interface. If the interrogator cannot distinguish between the computers and the humans, then the Turing Test has been passed. There are many different takes on the test (in some variations the interrogator knows that one of the entities they are questioning is a computer – in others they don't), but many computer scientists and philosophers have criticized its very premises for only assessing the appearance of intelligence.
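Since the passage describes the test as a concrete protocol (a blind interface, an interrogator, a guess), here is a minimal Python sketch of that protocol under stated assumptions; the function names, the single-question interrogator, and the pass criterion are illustrative choices, not taken from Turing's paper or the article.

```python
# A minimal sketch of the test protocol described above: an interrogator
# exchanges messages with two unlabeled respondents (one human, one machine)
# and must guess which is which. All names here are illustrative.
import random
from typing import Callable, Dict

Respondent = Callable[[str], str]  # maps a question to an answer

def run_imitation_game(interrogator: Callable[[Dict[str, Respondent]], str],
                       human: Respondent,
                       machine: Respondent) -> bool:
    """Return True if the machine escapes identification (i.e. "passes")."""
    labels = ["A", "B"]
    random.shuffle(labels)                       # hide identities behind arbitrary labels
    hidden = {labels[0]: human, labels[1]: machine}
    machine_label = labels[1]
    guess = interrogator(hidden)                 # interrogator names the label it thinks is the machine
    return guess != machine_label

def naive_interrogator(channels: Dict[str, Respondent]) -> str:
    """Ask each hidden respondent one question, then guess.

    This placeholder ignores the answers and guesses at random; a real
    interrogator would probe for failures of understanding.
    """
    for _label, ask in channels.items():
        _ = ask("What is your favourite childhood memory?")
    return random.choice(list(channels))
```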


Artificial Intelligence is stupid and causal reasoning won't fix it

arXiv.org Artificial Intelligence

Artificial Neural Networks have reached Grandmaster and even super-human performance across a variety of games: from those involving perfect information (such as Go) to those involving imperfect information (such as StarCraft). Such technological developments from AI labs have ushered in concomitant applications across the world of business, where an AI brand tag is fast becoming ubiquitous. A corollary of such widespread commercial deployment is that when AI gets things wrong - an autonomous vehicle crashes; a chatbot exhibits racist behaviour; automated credit scoring processes discriminate on gender, etc. - there are often significant financial, legal and brand consequences, and the incident becomes major news. As Judea Pearl sees it, the underlying reason for such mistakes is that 'all the impressive achievements of deep learning amount to just curve fitting'. The key, Judea Pearl suggests, is to replace reasoning by association with causal reasoning - the ability to infer causes from observed phenomena. It is a point that was echoed by Gary Marcus and Ernest Davis in a recent piece for the New York Times: 'we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets - often using an approach known as Deep Learning - and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality'. In this paper, foregrounding what in 1949 Gilbert Ryle termed a category mistake, I will offer an alternative explanation for AI errors: it is not so much that AI machinery cannot grasp causality, but that AI machinery - qua computation - cannot understand anything at all.
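To make the 'curve fitting' point concrete, the sketch below shows, under an invented data-generating process, how a purely associational fit can predict well on observational data yet answer an interventional question badly; it is an illustration of Pearl's distinction between association and causation, not anything taken from the paper itself.

```python
# A small sketch of the "curve fitting" point attributed to Judea Pearl above:
# an associational fit can look excellent on observational data yet give the
# wrong answer about an intervention. The data-generating process is invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder Z drives both X and Y; X has no causal effect on Y at all.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 3.0 * z + rng.normal(size=n)

# "Curve fitting": least-squares regression of Y on X finds a strong association.
slope = np.cov(x, y)[0, 1] / np.var(x)
print(f"associational slope of Y on X: {slope:.2f}")   # about 1.2, not 0

# Causal question: what happens to Y if we intervene and set X ourselves?
# Under do(X = 5), X no longer depends on Z, so Y is unchanged on average.
x_forced = np.full(n, 5.0)                              # forced value of X (never affects Y)
y_do = 3.0 * z + rng.normal(size=n)
print(f"mean Y under do(X=5): {np.mean(y_do):.2f}")     # about 0
print(f"curve-fit prediction:  {slope * 5.0:.2f}")      # about 6, badly wrong
```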