AI winter - Wikipedia

#artificialintelligence

In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The term was coined by analogy to the idea of a nuclear winter.[2] The field has experienced several hype cycles, each followed by disappointment and criticism, then by funding cuts, then by renewed interest years or decades later. An AI winter is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association for Artificial Intelligence"). At the meeting, Roger Schank and Marvin Minsky--two leading AI researchers who had survived the "winter" of the 1970s--warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.[2] Hype is common in many emerging technologies, such as the railway mania or the dot-com bubble, but an AI winter is primarily a collapse in the perception of AI among government bureaucrats and venture capitalists.


Ray Kurzweil on How We'll End Up Merging With Our Technology

#artificialintelligence

Dormehl starts with the 1964 World's Fair -- held only miles from where I lived as a high school student in Queens -- evoking the anticipation of a nation working on sending a man to the moon. He identifies the early examples of artificial intelligence that captured my own excitement at the time, like IBM's demonstrations of automated handwriting recognition and language translation. He writes as if he had been there. Dormehl describes the early bifurcation of the field into the Symbolic and Connectionist schools, and he captures key points that many historians miss, such as the uncanny confidence of Frank Rosenblatt, the Cornell professor who pioneered the first popular neural network (he called them "perceptrons"). I visited Rosenblatt in 1962 when I was 14, and he was indeed making fantastic claims for this technology, saying it would eventually perform a very wide range of tasks at human levels, including speech recognition, translation and even language comprehension.


History of artificial intelligence - Wikipedia

#artificialintelligence

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with "an ancient wish to forge the gods."[1] The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain. The Turing test was proposed by British mathematician Alan Turing in his 1950 paper "Computing Machinery and Intelligence", which opens with the words: "I propose to consider the question, 'Can machines think?'" The term 'Artificial Intelligence' was coined at a conference held at Dartmouth College in 1956.[2] Allen Newell, J. C. Shaw, and Herbert A. Simon pioneered the newly created artificial intelligence field with the Logic Theory Machine (1956) and the General Problem Solver (1957).[3] In 1958, John McCarthy and Marvin Minsky started the MIT Artificial Intelligence lab with $50,000.[4] McCarthy also created LISP in the summer of 1958, a programming language still important in artificial intelligence research.[5] In 1973, in response to the criticism of James Lighthill and ongoing pressure from the U.S. Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s the investors became disillusioned and withdrew funding again. McCorduck (2004) writes that "artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized," expressed in humanity's myths, legends, stories, speculation and clockwork automatons.[6] Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea.[7] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem.[8] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).


The Birth of AI and The First AI Hype Cycle

@machinelearnbot

Every decade seems to have its technological buzzwords: we had personal computers in the 1980s; the Internet and World Wide Web in the 1990s; smartphones and social media in the 2000s; and artificial intelligence (AI) and machine learning in this decade. While AI is among today's most popular topics, a commonly forgotten fact is that it was actually born in 1950 and went through a hype cycle between 1956 and 1982. The purpose of this article is to highlight some of the achievements that took place during the boom phase of this cycle and to explain what led to its bust phase. The lessons of this hype cycle should not be overlooked: its successes formed the archetypes for the machine learning algorithms used today, and its shortcomings showed the dangers of overenthusiasm in promising fields of research and development. Although the first computers were developed during World War II [1,2], what truly seemed to spark the field of AI was a question posed by Alan Turing in 1950 [3]: can a machine imitate human intelligence?


A Very Short History Of Artificial Intelligence (AI)

#artificialintelligence

In an expanded edition of Perceptrons published in 1988, Minsky and Papert responded to claims that their 1969 conclusions had significantly reduced funding for neural network research: "Our version is that progress had already come to a virtual halt because of the lack of adequate basic theories… by the mid-1960s there had been a great many experiments with perceptrons, but no one had been able to explain why they were able to recognize certain kinds of patterns and not others."