
Computing machinery and intelligence

Classics

An excellent place to start. In this article, Turing not only proposes the Imitation Game in its original form but also addresses nine objections to the possibility of machine intelligence, including objections based on Gödel's theorem and on consciousness. Several recent arguments against AI are variations on the ones Turing enumerates. 'I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. ... The new form of the problem can be described in terms of a game which we call the "imitation game."' Mind 59: 433-460 (PDF from Oxford University Press).


Programming a computer for playing chess

Classics

Full text available for a fee. (The paper was first presented in March 1949 at the National Convention of the Institute of Radio Engineers in New York.) See also: summary slides. Philosophical Magazine (Series 7) 41: 256-275.


Statistics for the chess computer and the factor of mobility

Classics

In Symposium on Information Theory, pp. 150-152. Ministry of Supply. See also: Computer chess compendium citation in ACM Digital Library: http://dl.acm.org/citation.cfm?id=67012.


I, Robot

Classics



Giant Brains, or Machines That Think

Classics

New York: Wiley. See also: SAO/NASA ADS General Science Abstract Service (http://adsabs.harvard.edu/abs/1950SciMo..70..342B).


Intelligent machinery

Classics

Tech. rep., National Physical Laboratory. Reprinted in Meltzer, Bernard and Donald Michie (Eds.), Machine Intelligence 5, Edinburgh University Press. (Also in Ince, 1992.) See also: Full Text (scanned); Google Books.



A logical calculus of the ideas immanent in nervous activity

Classics

Oliver Selfridge, in The Gardens of Learning, wrote: "I have watched AI since its beginnings... In 1943, I was an undergraduate at the Massachusetts Institute of Technology (MIT) and met a man whom I was soon to be a roommate with. He was but three years older than I, and he was writing what I deem to be the first directed and solid piece of work in AI (McCulloch and Pitts 1943). His name was Walter Pitts, and he had teamed up with a neurophysiologist named Warren McCulloch, who was busy finding out how neurons worked (McCulloch and Pitts 1943).... Figure 1 shows a couple of examples of neural nets from this paper---the first AI paper ever."

From the introduction to the Warren S. McCulloch Papers, American Philosophical Society (http://www.amphilsoc.org/mole/view?docId=ead/Mss.B.M139-ead.xml;query=;brand=default): "Although an important figure in the early development of computing, McCulloch's goal in research was as much to lay bare the foundations for how we think as it was to develop practical applications - or in other words, to develop an 'experimental epistemology' with which to relate mind and brain. Perhaps the most significant work to emerge from this period of McCulloch's career was his landmark paper with Walter Pitts, 'A Logical Calculus Immanent in Nervous Activity' (Bulletin of Mathematical Biophysics 5 (1943): 115-133). The 'Logical calculus' was an attempt to develop just that: a rigorous description of neural activity independent of resort to theories of a soul or mind. Together with McCulloch and Pitts' follow-up work, 'How we know universals: The perception of auditory and visual forms' (Bulletin of Mathematical Biophysics 9 (1947): 127-147), the 'Logical calculus' provided a compact mathematical model for understanding neural relationships, laying the groundwork for neural network theory and automata theory, and forming the ur-foundation of modern computation (through John von Neumann) and cybernetics. (See Marvin Minsky, Computation: Finite and Infinite Machines, Englewood Cliffs, NJ: Prentice-Hall, 1967, for a very readable treatment of the computational aspects of McCulloch/Pitts neurons.)"

Bulletin of Mathematical Biophysics, 5, 115-133.
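As a purely illustrative aside (the code and the particular weights and thresholds below are not taken from the 1943 paper): a McCulloch-Pitts unit is a binary threshold element that fires exactly when the weighted sum of its 0/1 inputs reaches its threshold, and small arrangements of such units realize Boolean functions. A minimal Python sketch:

    # McCulloch-Pitts threshold unit: output 1 iff the weighted input sum
    # reaches the threshold; inputs and outputs are binary (0 or 1).
    def mp_neuron(inputs, weights, threshold):
        return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

    # Illustrative gates built from single units (weights and thresholds
    # chosen here only for the example).
    AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)   # both inputs must fire
    OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)   # either input suffices
    NOT = lambda a:    mp_neuron([a], [-1], 0)        # inhibitory input

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
    print("NOT:", NOT(0), NOT(1))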


A general theory of learning and conditioning: Part I

Classics

Psychometrika, March 1943, Volume 8, Issue 1, pp. 1-18. Oliver Selfridge (in his 1993 Gardens of Learning paper) called the 1943 paper by McCulloch & Pitts "the first AI paper ever". See also: A general theory of learning and conditioning: Part II, Psychometrika, June 1943, Volume 8, Issue 2, pp. 131-140 (https://link.springer.com/article/10.1007/BF02288697).