
Tracing The History Of Artificial Intelligence

#artificialintelligence

Earlier this week, I found myself answering a question from a new colleague at Finning International that relates both to the research I do in the iSchool at the University of British Columbia and to the analytics, engineering & technology work I lead at Finning. The question was simple: what is artificial intelligence? As I sat to reflect last evening, it dawned on me that taking the time to craft a clear answer might be extremely beneficial for many. Analytics, data science, and predictive intelligence are hot topics in many communities and business areas. And yet, despite this interest, few of the people I have talked to have a clear understanding of the history of the discipline, a history that frames much of the work currently going on within the space.


History of artificial intelligence - Wikipedia, the free encyclopedia

#artificialintelligence

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with "an ancient wish to forge the gods."[1] The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain. The Turing test was proposed by British mathematician Alan Turing in his 1950 paper "Computing Machinery and Intelligence", which opens with the words: "I propose to consider the question, 'Can machines think?'" The term 'Artificial Intelligence' was coined at a conference held at Dartmouth College in 1956.[2] Allen Newell, J. C. Shaw, and Herbert A. Simon pioneered the newly created artificial intelligence field with the Logic Theory Machine (1956), and the General Problem Solver in 1957.[3] In 1958, John McCarthy and Marvin Minsky started the MIT Artificial Intelligence lab with US$50,000.[4] John McCarthy also created LISP in the summer of 1958, a programming language still important in artificial intelligence research.[5] In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s investors became disillusioned and withdrew funding again.
McCorduck (2004) writes "artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized," expressed in humanity's myths, legends, stories, speculation and clockwork automatons.[6] Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea.[7] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem.[8] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).


Artificial Intelligence: Structures and Strategies for Complex Problem Solving

AITopics Original Links

Many and long were the conversations between Lord Byron and Shelley to which I was a devout and silent listener. During one of these, various philosophical doctrines were discussed, and among others the nature of the principle of life, and whether there was any probability of its ever being discovered and communicated. They talked of the experiments of Dr. Darwin (I speak not of what the doctor really did or said that he did, but, as more to my purpose, of what was then spoken of as having been done by him), who preserved a piece of vermicelli in a glass case till by some extraordinary means it began to move with a voluntary motion. Not thus, after all, would life be given. Perhaps a corpse would be reanimated; galvanism had given token of such things: perhaps the component parts of a creature might be manufactured, brought together, and endued with vital warmth (Butler 1998).


Artificial Intelligence Narratives: An Objective Perspective on Current Developments

arXiv.org Artificial Intelligence

This work provides a starting point for researchers interested in gaining a deeper understanding of the big picture of artificial intelligence (AI). To this end, a narrative is conveyed that allows the reader to develop an objective view on current developments that is free from false promises that dominate public communication. An essential takeaway for the reader is that AI must be understood as an umbrella term encompassing a plethora of different methods, schools of thought, and their respective historical movements. Consequently, a bottom-up strategy is pursued in which the field of AI is introduced by presenting various aspects that are characteristic of the subject. This paper is structured in three parts: (i) Discussion of current trends revealing false public narratives, (ii) an introduction to the history of AI focusing on recurring patterns and main characteristics, and (iii) a critical discussion on the limitations of current methods in the context of the potential emergence of a strong(er) AI. It should be noted that this work does not cover any of these aspects holistically; rather, the content addressed is a selection made by the author and subject to a didactic strategy.