From the Dartmouth Conferences to Turing's test, prophecies about AI have rarely hit the mark. In 1956, a group of the leading minds in the field thought they could crack the challenge of artificial intelligence over a single hot New England summer. Almost 60 years later, the world is still waiting. The "spectacularly wrong prediction" of the Dartmouth Summer Research Project on Artificial Intelligence prompted Stuart Armstrong, research fellow at the Future of Humanity Institute at the University of Oxford, to start thinking about why our predictions about AI are so inaccurate. The Dartmouth Conference had predicted that over two summer months ten of the brightest people of their generation would solve some of the key problems faced by AI developers, such as getting machines to use language, form abstract concepts and even improve themselves.
British healthcare workers are hostile to their robotic co-workers, committing "minor acts of sabotage" such as standing in their way, according to a recent study by De Montfort University, which chided the humans for "not playing along with" their automated peers. The researchers contrasted the "problematic" British attitude with that of Norwegian workers, who embraced their silicon colleagues, even giving them friendly nicknames. Some 30 percent of UK jobs will be lost to automation within 15 years if current trends continue apace, according to PricewaterhouseCoopers. The percentage is even greater in the US (38 percent) as well as Germany and France (37 percent), but falls to 25 percent in Nordic countries such as Norway and Finland. Perhaps this explains the difference in workplace interactions between the British and the Norwegians - the latter aren't as worried about losing their jobs to an electronic interloper.
In the 18th century, those operating at the highest levels of society, from London to Moscow, needed to be able to speak French, then the language of status, the nobility, politics, intellectual life and modernisation. A hundred years later, British advances in industry, science and engineering meant that English succeeded French: a tongue with West Germanic origins replaced a Romance language as the means of conducting business and diplomacy on the international stage. Today, even in some parts of China, English is still used as the global lingua franca, a leveller that enables deals to get done and the wheels of commerce and technology to spin. Around a decade ago, another type of language – one that was written rather than spoken – was held up as a deterministic factor for those seeking to gain influence or advantage in the digital age: coding. Its champions proselytised that proficiency in programming would determine employability and access to a thrusting, energetic entrepreneurial future. Over two or three years, a small industry sprang up, intent on instructing those who had no formal education in computer science to create products using code.
In his 1990 book The Age of Intelligent Machines, the American computer scientist and futurist Ray Kurzweil made an astonishing prediction. Having worked at the Massachusetts Institute of Technology (MIT) throughout the 1970s and 1980s, where he saw firsthand the remarkable advances in artificial intelligence pioneered there by Marvin Minsky and others, he forecast that a computer would pass the Turing test – the test of a machine's ability to exhibit behaviour indistinguishable from human intelligence – between 2020 and 2050.