Computer pioneer and artificial intelligence (AI) theorist Alan Turing would have been 100 years old this Saturday. To mark the anniversary, the BBC has commissioned a series of essays. This, the fourth article, explores his influence on AI research and the resulting controversy. Alan Turing was clearly a man ahead of his time. In 1950, at the dawn of computing, he was already grappling with the question: "Can machines think?"
"Yes ma'am, Roll-Oh get," the robot says as it goes to the kitchen to prepare dinner, opening a can of food and lighting a candle with a jet of flame. Roll-Oh walks clumsily, suspiciously like a man in an uncomfortable costume. Nonetheless, it ably frees the housewife of all her daily chores at the simple press of a button. This was the promise of robotics as demonstrated in the 1940 short film "Leave It to Roll-Oh", presented at the New York World's Fair. Mechanical devices already automate so much of our lives, the film argues, that it would be only a matter of time before we could expect personal, four-limbed metal people as ready-made servants: watering our plants, greeting our mailman, helping cook dinner.
Turing's 1950 paper "Computing Machinery and Intelligence", and the Turing Test it proposed, established the fundamental goal and vision of artificial intelligence. At its core, AI is the branch of computer science that aims to answer Turing's question in the affirmative: it is the endeavor to replicate or simulate human intelligence in machines. This expansive goal has given rise to so many questions and debates that no single definition of the field is universally accepted.
As mentioned already on The Conversation and other websites, this year marks the 100th anniversary of the birth of the famed British mathematician Alan Turing. The outline of his remarkable life and sad ending has by now become fairly well known. Turing laid numerous foundation stones of modern computing, ranging from the deepest mathematical nature of computation (using what are now called Turing machines, he provided the modern approach to incompleteness and undecidability) to specific issues of practical design; he also contributed to mathematical biology (morphogenesis) and much else. At the same time, he played a key role in the British government's breaking of the German Enigma code at the now-fabled, but then ultra-secret, Bletchley Park, thus arguably accelerating the end of the Second World War. According to some, computer intelligence is on course to match human intelligence by 2045.
In 1936, whilst studying for his Ph.D. at Princeton University, the English mathematician Alan Turing published a paper, "On Computable Numbers, with an Application to the Entscheidungsproblem," which became a foundation of computer science. In it Turing presented a theoretical machine that could solve any problem that could be described by simple instructions encoded on a paper tape. One Turing machine could calculate square roots, whilst another might solve Sudoku puzzles. Turing demonstrated that you could construct a single Universal Machine capable of simulating any other Turing machine: one machine solving any problem, performing any task for which a program could be written. Sound familiar?
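The idea of a machine driven by a table of simple instructions can be sketched in a few lines of code. The following is a minimal, illustrative simulator, not Turing's own formalism: the transition table, blank symbol, and the bit-flipping example machine are all assumptions chosen for brevity.

```python
# A minimal sketch of a Turing machine simulator. Each transition maps
# (current state, symbol under the head) -> (next state, symbol to write,
# head move). "_" stands for the blank symbol. This encoding is a
# simplification chosen for illustration, not Turing's original notation.

def run_turing_machine(transitions, tape, state="start", halt="halt"):
    """Execute a Turing machine until it enters the halt state."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != halt:
        symbol = cells.get(pos, "_")
        state, write, move = transitions[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A hypothetical example machine: invert every bit, halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flip_bits, "1011"))  # -> 0100
```

A Universal Machine, in this picture, is simply a fixed transition table whose tape contains both another machine's table and its input; the simulator above plays that role in software, which is exactly the sense in which every modern computer is a Universal Machine.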