From the Dartmouth Conference to Turing's test, prophecies about AI have rarely hit the mark. In 1956, a group of the top brains in their field thought they could crack the challenge of artificial intelligence over a single hot New England summer. Almost 60 years later, the world is still waiting. The "spectacularly wrong prediction" of the Dartmouth Summer Research Project on Artificial Intelligence made Stuart Armstrong, research fellow at the Future of Humanity Institute at the University of Oxford, start to think about why our predictions about AI are so inaccurate. The Dartmouth Conference had predicted that over two summer months ten of the brightest people of their generation would solve some of the key problems faced by AI developers, such as getting machines to use language, form abstract concepts and even improve themselves.
"I propose to consider the question, 'Can machines think?' This should begin with definitions of the meaning of the words 'machine' and 'think'." So wrote computing pioneer Alan Turing in the introduction to his seminal paper on machine intelligence, published in the journal Mind in 1950. That paper – introducing the 'imitation game', Turing's adversarial test for machine intelligence, in which a human must decide whether what we now call a chatbot is a human or a computer – helped spark the field of research which later became more widely known as artificial intelligence. While no researcher has yet made a general-purpose thinking machine – what's known as artificial general intelligence – that passes Turing's test, a wide variety of special-purpose AIs have been created to focus on, and solve, very specific problems, such as image and speech recognition, and defeating chess and Go champions.
In the 18th century, those operating at the highest levels of society, from London to Moscow, needed to be able to speak French, then the language of status, the nobility, politics, intellectual life and modernisation. A hundred years later, British advances in industry, science and engineering meant that English succeeded French: a tongue with West Germanic origins replaced a Romance language as the means of conducting business and diplomacy on the international stage. Today, even in some parts of China, English is used as the global lingua franca, a leveller that enables deals to get done and the wheels of commerce and technology to spin. Around a decade ago, another type of language – one that was written rather than spoken – was held up as a deterministic factor for those seeking to gain influence or advantage in the digital age: coding. Its champions proselytised that proficiency in programming would determine employability and access to a thrusting, energetic entrepreneurial future. Over two or three years, a small industry sprang up, intent on instructing those who had no formal education in computer science to create products using code.
In his 1990 book The Age of Intelligent Machines, the American computer scientist and futurist Ray Kurzweil made an astonishing prediction. Having worked at the Massachusetts Institute of Technology (MIT) throughout the 1970s and 1980s, and having seen firsthand the remarkable advances in artificial intelligence pioneered there by Marvin Minsky and others, he forecast that a computer would pass the Turing test – the test of whether a machine's responses are indistinguishable from those of a human – between 2020 and 2050.