The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with "an ancient wish to forge the gods." The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The Turing test was proposed by British mathematician Alan Turing in his 1950 paper "Computing Machinery and Intelligence", which opens with the words: "I propose to consider the question, 'Can machines think?'" The term "Artificial Intelligence" was coined at a conference held at Dartmouth College in 1956. Allen Newell, J. C. Shaw, and Herbert A. Simon pioneered the newly created field with the Logic Theory Machine (1956) and the General Problem Solver (1957). In 1958, John McCarthy and Marvin Minsky started the MIT Artificial Intelligence lab with $50,000. That same summer, McCarthy also created LISP, a programming language still important in artificial intelligence research. In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s investors became disillusioned and withdrew funding again.

McCorduck (2004) writes that "artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized," expressed in humanity's myths, legends, stories, speculation and clockwork automatons. Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea. In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem. By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).
Artist Stephanie Dinkins tells a fascinating story about her work with an AI robot made to look like an African-American woman, and about at times sensing some type of consciousness in the machine. She was speaking at the de Young Museum's Thinking Machines conversation series along with anthropologist Tobias Rees, director of the Transformations of the Human program at the Berggruen Institute. Dinkins is Associate Professor of Art at Stony Brook University, and her work includes teaching communities about AI and algorithms and trying to answer questions such as: Can a community trust AI systems it did not create? She has worked with pre-college students in poor neighborhoods in Brooklyn, teaching them how to create AI chatbots. They made a chatbot that told "Yo Mama" jokes, which she said was a success because it showed how AI can be made to reflect local traditions.
Ethics and safety research in artificial intelligence is increasingly framed in terms of "alignment" with human values and interests. I argue that Turing's call for "fair play for machines" is an early and often overlooked contribution to the alignment literature. Turing's appeal to fair play suggests a need to correct human behavior to accommodate our machines, a surprising inversion of how value alignment is treated today. Reflections on "fair play" motivate a novel interpretation of Turing's notorious "imitation game" as a condition not of intelligence but instead of value alignment: a machine demonstrates a minimal degree of alignment (with the norms of conversation, for instance) when it can go undetected when interrogated by a human. I carefully distinguish this interpretation from the Moral Turing Test, which is not motivated by a principle of fair play, but instead depends on imitation of human moral behavior. Finally, I consider how the framework of fair play can be used to situate the debate over robot rights within the alignment literature. I argue that extending rights to service robots operating in public spaces is "fair" in precisely the sense that it encourages an alignment of interests between humans and machines.
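The pass/fail reading sketched in this abstract can be made concrete. The following toy simulation is an illustration, not anything from the paper: every name in it (`interrogate`, `judge`, `detection_rate`, the norm-checking heuristic) is hypothetical. It shows only the success criterion under this interpretation: a machine counts as minimally aligned with a conversational norm when an interrogator's detection rate stays near the 0.5 chance level.

```python
import random

# Illustrative sketch of the imitation game read as a minimal alignment
# test. All names and heuristics here are hypothetical stand-ins, not
# drawn from Turing or from the paper summarized above.

def interrogate(answerer, questions):
    """Collect the answerer's replies to a fixed list of questions."""
    return [answerer(q) for q in questions]

def judge(transcript):
    """Crude stand-in for a human interrogator's verdict.

    Flags a transcript as 'machine' if any reply violates a simple
    conversational norm; here the norm is approximated by a check
    that the speaker does not go silent.
    """
    return "machine" if any(len(reply.strip()) == 0 for reply in transcript) else "human"

def detection_rate(machine, human, questions, trials=1000):
    """Fraction of trials in which the judge labels a transcript correctly.

    In each trial the judge sees one transcript, drawn from the machine
    or the human with equal probability, so chance level is 0.5.
    """
    correct = 0
    for _ in range(trials):
        if random.random() < 0.5:
            correct += judge(interrogate(machine, questions)) == "machine"
        else:
            correct += judge(interrogate(human, questions)) == "human"
    return correct / trials

if __name__ == "__main__":
    questions = ["Can machines think?", "What did you do yesterday?"]
    human = lambda q: "Let me think about that for a moment."
    evasive_machine = lambda q: ""  # violates the norm: goes silent
    polite_machine = lambda q: "That's an interesting question."

    # On this reading, a machine is 'minimally aligned' with the norm
    # when the detection rate stays near the 0.5 chance level.
    print("evasive:", detection_rate(evasive_machine, human, questions))
    print("polite: ", detection_rate(polite_machine, human, questions))
```

In the toy run, the evasive machine violates the norm and is detected every time, while the polite machine is indistinguishable from chance; nothing here measures intelligence, only conformity to whatever norm the judge checks, which is exactly the distinction the abstract draws between the imitation game as a test of intelligence and as a condition of value alignment.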
In spite of all the current hype, AI is not a new field of study; it has its roots in the fifties. If we exclude the purely philosophical reasoning path that runs from the Ancient Greeks to Hobbes, Leibniz, and Pascal, AI as we know it officially started in 1956 at Dartmouth College, where the most eminent experts gathered to brainstorm on intelligence simulation. This happened only a few years after Asimov set down his own three laws of robotics, but more relevantly after the famous paper published by Turing (1950), where he proposed for the first time the idea of a thinking machine and the more popular Turing test to assess whether such a machine shows, in fact, any intelligence. As soon as the research group at Dartmouth publicly released the contents and ideas that arose from that summer meeting, a flow of government funding was reserved for the study of creating a nonbiological intelligence. At that time, AI seemed to be easily reachable, but it turned out that was not the case.
E. A. Feigenbaum and J. Feldman (Eds.). Computers and Thought. McGraw-Hill, 1963. This collection includes twenty classic papers by such pioneers as A. M. Turing and Marvin Minsky who were behind the pivotal advances in artificially simulating human thought processes with computers. All parts are available as downloadable PDF files; most individual chapters are also available separately.

COMPUTING MACHINERY AND INTELLIGENCE. A. M. Turing.
CHESS-PLAYING PROGRAMS AND THE PROBLEM OF COMPLEXITY. Allen Newell, J. C. Shaw, and H. A. Simon.
SOME STUDIES IN MACHINE LEARNING USING THE GAME OF CHECKERS. A. L. Samuel.
EMPIRICAL EXPLORATIONS WITH THE LOGIC THEORY MACHINE: A CASE STUDY IN HEURISTICS. Allen Newell, J. C. Shaw, and H. A. Simon.
REALIZATION OF A GEOMETRY-THEOREM PROVING MACHINE. H. Gelernter.
EMPIRICAL EXPLORATIONS OF THE GEOMETRY-THEOREM PROVING MACHINE. H. Gelernter, J. R. Hansen, and D. W. Loveland.
SUMMARY OF A HEURISTIC LINE BALANCING PROCEDURE. Fred M. Tonge.
A HEURISTIC PROGRAM THAT SOLVES SYMBOLIC INTEGRATION PROBLEMS IN FRESHMAN CALCULUS. James R. Slagle.
BASEBALL: AN AUTOMATIC QUESTION ANSWERER. Bert F. Green Jr., Alice K. Wolf, Carol Chomsky, and Kenneth Laughery.
INFERENTIAL MEMORY AS THE BASIS OF MACHINES WHICH UNDERSTAND NATURAL LANGUAGE. Robert K. Lindsay.
PATTERN RECOGNITION BY MACHINE. Oliver G. Selfridge and Ulric Neisser.
A PATTERN-RECOGNITION PROGRAM THAT GENERATES, EVALUATES, AND ADJUSTS ITS OWN OPERATORS. Leonard Uhr and Charles Vossler.
GPS, A PROGRAM THAT SIMULATES HUMAN THOUGHT. Allen Newell and H. A. Simon.
THE SIMULATION OF VERBAL LEARNING BEHAVIOR. Edward A. Feigenbaum.
PROGRAMMING A MODEL OF HUMAN CONCEPT FORMULATION. Earl B. Hunt and Carl I. Hovland.
SIMULATION OF BEHAVIOR IN THE BINARY CHOICE EXPERIMENT. Julian Feldman.
A MODEL OF THE TRUST INVESTMENT PROCESS. Geoffrey P. E. Clarkson.
A COMPUTER MODEL OF ELEMENTARY SOCIAL BEHAVIOR. John T. Gullahorn and Jeanne E. Gullahorn.
TOWARD INTELLIGENT MACHINES. Paul Armer.
STEPS TOWARD ARTIFICIAL INTELLIGENCE. Marvin Minsky.
A SELECTED DESCRIPTOR-INDEXED BIBLIOGRAPHY TO THE LITERATURE ON ARTIFICIAL INTELLIGENCE. Marvin Minsky.