What really is Artificial Intelligence about?

#artificialintelligence

In 2014, Professor Stephen Hawking warned that the development of (true) artificial intelligence "could spell the end of the human race" (Rory Cellan-Jones, technology correspondent, BBC). In a later follow-up, he was again reported as saying that artificial intelligence could be humanity's greatest disaster.


History of artificial intelligence - Wikipedia, the free encyclopedia

#artificialintelligence

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with "an ancient wish to forge the gods."[1] The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The Turing test was proposed by British mathematician Alan Turing in his 1950 paper "Computing Machinery and Intelligence", which opens with the words: "I propose to consider the question, 'Can machines think?'" The term 'Artificial Intelligence' was coined at a conference held at Dartmouth College in 1956.[2] Allen Newell, J. C. Shaw, and Herbert A. Simon pioneered the newly created field with the Logic Theory Machine (1956) and the General Problem Solver (1957).[3] In 1958, John McCarthy and Marvin Minsky started the MIT Artificial Intelligence lab with $50,000.[4] McCarthy also created LISP in the summer of 1958, a programming language still important in artificial intelligence research.[5]

In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s the investors became disillusioned and withdrew funding again.

McCorduck (2004) writes that "artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized," expressed in humanity's myths, legends, stories, speculation and clockwork automatons.[6] Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea.[7] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem.[8] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).



We Almost Gave Up On Building Artificial Brains - The Crux

#artificialintelligence

Today artificial neural networks are making art, writing speeches, identifying faces and even driving cars. It feels as if we're riding the wave of a novel technological era, but the current rise of neural networks is actually a renaissance of sorts. It may be hard to believe, but artificial intelligence researchers were already seeing promise in mathematical models of neural networks during World War II. Yet by the 1970s, the field was ready to give up on them entirely. "[T]here were no impressive results until computers grew up, that is until the past 10 years," says Patrick Henry Winston, a professor at MIT who specializes in artificial intelligence.
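For a sense of what those wartime mathematical models looked like, the best known is the McCulloch-Pitts threshold neuron (1943), which computes a binary output from weighted binary inputs. A minimal sketch in Python; the unit weights and threshold below are illustrative choices (not from the article) that make the neuron act as a logical AND gate:

    def mcculloch_pitts_neuron(inputs, weights, threshold):
        """McCulloch-Pitts (1943) unit: fire (1) if the weighted sum
        of the binary inputs reaches the threshold, otherwise 0."""
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    # With unit weights and threshold 2, the neuron is a two-input AND gate.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", mcculloch_pitts_neuron([a, b], [1, 1], threshold=2))

Networks of such fixed-weight units could compute any Boolean function, which is what made them seem promising decades before learning algorithms and fast hardware arrived.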


Artificial intelligence - Wikipedia, the free encyclopedia

#artificialintelligence

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, an ideal "intelligent" machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".[2] As machines become increasingly capable, capabilities once thought to require intelligence are removed from the definition. For example, optical character recognition is no longer perceived as an exemplar of "artificial intelligence", having become a routine technology.[3] Capabilities still classified as AI include advanced chess and Go systems and self-driving cars.

AI research is divided into subfields[4] that focus on specific problems, specific approaches, the use of particular tools, or particular applications. The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects.[5] General intelligence is among the field's long-term goals.[6] Approaches include statistical methods, computational intelligence, soft computing (e.g. machine learning), and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, logic, and methods based on probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience and artificial psychology.

The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it."[7] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[8] Attempts to create artificial intelligence have experienced many setbacks, including the ALPAC report of 1966, the abandonment of perceptrons in 1970, the Lighthill Report of 1973 and the collapse of the Lisp machine market in 1987. In the twenty-first century, AI techniques became an essential part of the technology industry, helping to solve many challenging problems in computer science.[9]
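The "rational agent" definition above is often pictured as a perceive-then-act loop: observe the environment, then pick the action expected to best advance the goal. A minimal sketch, where the thermostat environment, action set, and utility function are hypothetical examples chosen only to make the loop concrete:

    def rational_agent_step(percept, actions, expected_utility):
        """Textbook rational-agent rule: choose the action that
        maximizes expected success at the goal, given the percept."""
        return max(actions, key=lambda a: expected_utility(percept, a))

    # Toy environment: a thermostat sensing temperature and acting
    # to keep the room near a 21-degree setpoint.
    def utility(temp, action):
        effect = {"heat": +1.0, "cool": -1.0, "idle": 0.0}[action]
        return -abs((temp + effect) - 21.0)  # closer to setpoint is better

    print(rational_agent_step(19.0, ["heat", "cool", "idle"], utility))  # -> heat

Real agents replace the toy utility with learned or probabilistic estimates, which is where the statistical and machine-learning approaches listed above come in.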