When IBM's Deep Blue supercomputer won its famous chess rematch with then world champion Garry Kasparov in May 1997, the victory was hailed far and wide as a triumph of artificial intelligence. But John McCarthy – the man who coined the term and pioneered the field of AI research – didn't see it that way. As far back as the mid-60s, chess was called the "Drosophila of artificial intelligence" – a reference to the fruit flies biologists used to uncover the secrets of genetics – and McCarthy believed his successors in AI research had taken the analogy too far. "Computer chess has developed much as genetics might have if the geneticists had concentrated their efforts starting in 1910 on breeding racing Drosophila," McCarthy wrote following Deep Blue's win. "We would have some science, but mainly we would have very fast fruit flies."
Since Alan Turing first posed the question "Can machines think?" in his seminal 1950 paper "Computing Machinery and Intelligence", Artificial Intelligence (AI) has failed to deliver on its grandest promise: Artificial General Intelligence. There have, however, been incredible advances in the field, including Deep Blue beating the world's best chess player, the birth of autonomous vehicles, and Google DeepMind's AlphaGo beating the world's best Go player. These achievements represent the culmination of more than 65 years of research and development. Importantly, during this period there were two well-documented AI Winters that nearly extinguished faith in the promise of AI.
We human beings are the most sophisticated living machines on this Earth: the most powerful intellectual engines, with our own intelligence for making decisions. That intellect ensured we came to rule over all other living creatures on the planet. We first acquired the skills necessary for our survival, but once survival was assured we began to explore further; our intelligence, which knows no boundaries, wanted more. We invented tools to save ourselves time and to give us greater safety and security, and gradually we ventured to build machines that could serve as extensions of our brains, memorising more information and multitasking on our behalf.
Alan Turing is often praised as the foremost figure in the historical process that led to the rise of the modern electronic computer. Particular attention has been devoted to the purported connection between a "Universal Turing Machine" (UTM), as introduced in Turing's article of 1936 [27], and the design and implementation in the mid-1940s of the first stored-program computers, with particular emphasis on the respective proposals of John von Neumann for the EDVAC [30] and of Turing himself for the ACE [26]. In some recent accounts, von Neumann's and Turing's proposals (and the machines built on them) are unambiguously described as direct implementations of a UTM, as defined in 1936. "What Turing described in 1936 was not an abstract mathematical notion but a solid three-dimensional machine (containing, as he said, wheels, levers, and paper tape); and the cardinal problem in electronic computing's pioneering years, taken on by both 'Proposed Electronic Calculator' and the 'First Draft', was just this: How best to build a practical electronic form of the UTM?" [9] "[The] essential point of the stored-program computer is that it is built to implement a logical idea, Turing's idea: the universal Turing machine of 1936." [18] This statement is of particular interest because, in his authoritative biography [21] of Turing (first published in 1983), Hodges typically follows a much more nuanced and careful approach to this entire issue. For instance, when referring to a mocking 1936 comment by David Champernowne, a friend of Turing, to the effect that the universal machine would require the Albert Hall to house its construction, Hodges commented that this "was fair comment on Alan's design in 'Computable Numbers', for if he had any thoughts of making it a practical proposition they did not show in the paper." [21] "Did [Turing] think in terms of constructing a universal machine at this stage?
There is not a shred of direct evidence, nor was the design as described in his paper in any way influenced by practical considerations ... My own belief is that the 'interest' [in building an actual machine] may have been at the back of his mind all the time after 1936, and quite possibly motivated some of his eagerness to learn about engineering techniques. But as he never said or wrote anything to this effect, the question must be left to tantalize the imagination." [21] Discussions of this issue tend to be based on retrospective accounts, sometimes even on hearsay. The most often quoted one comes from Max Newman, who had been Turing's teacher and mentor back in the early Cambridge days and, later, became a leading figure in the rise of the modern electronic computer, sometimes collaborating with Turing. "The description that [Turing] gave of a 'universal' computing machine was entirely theoretical in purpose, but Turing's strong interest in all kinds of practical experiment made him even then interested in the possibility of actually constructing a machine on these lines." [6]
Donald Michie was born in Rangoon on November 11 1923, the son of James Michie and the former Marjorie Crain. From Rugby he won a classical scholarship to Balliol, becoming - according to wartime colleagues - "curator of the Balliol Book of Bawdy Verse". In 1942 he was recruited to Bletchley Park. He was put into Hut F, working to crack the Wehrmacht's "Tunny" machine, which encoded material more sensitive than that carried by the now celebrated "Enigma". The team's success gave the Allies access for the first time to German army situation reports in the run-up to D-Day, with invaluable insights into troop dispositions in France.
"In the early 1970s, he presented a paper in France on buying and selling by computer, what is now called electronic commerce," said Whitfield Diffie, an Internet security expert who worked as a researcher for Dr. McCarthy at the Stanford Artificial Intelligence Laboratory. And in the study of artificial intelligence, "no one is more influential than John," Mr. Diffie said. While teaching mathematics at Dartmouth in 1956, Dr. McCarthy was the principal organizer of the first Dartmouth Conference on Artificial Intelligence. The idea of simulating human intelligence had been discussed for decades, but the term "artificial intelligence" -- originally used to help raise funds to support the conference -- stuck. In 1958, Dr. McCarthy moved to the Massachusetts Institute of Technology, where, with Marvin Minsky, he founded the Artificial Intelligence Laboratory.
At one laboratory, a small group of scientists and engineers worked to replace the human mind, while at the other, a similar group worked to augment it. In 1963 the mathematician-turned-computer scientist John McCarthy started the Stanford Artificial Intelligence Laboratory. The researchers believed that it would take only a decade to create a thinking machine. Also that year the computer scientist Douglas Engelbart formed what would become the Augmentation Research Center to pursue a radically different goal -- designing a computing system that would instead "bootstrap" the human intelligence of small groups of scientists and engineers. For the past four decades that basic tension between artificial intelligence and intelligence augmentation -- A.I. versus I.A. -- has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world.
In 1955 the computer scientist John McCarthy, who has died aged 84, coined the term artificial intelligence, or AI. His pioneering work in AI – which he defined as "the science and engineering of making intelligent machines" – included organising the first Dartmouth conference on artificial intelligence, and developing the programming language Lisp in 1958. This was the second high-level language, after Fortran, and was based on the radical idea of computing using symbolic expressions rather than numbers. It helped spawn a whole AI industry. McCarthy was also the first to propose a time-sharing model of computing.
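The idea of computing with symbolic expressions rather than numbers can be illustrated with a toy sketch. This is written in Python rather than Lisp, and the evaluator is a deliberately minimal illustration of the concept, not McCarthy's actual design: programs are nested lists of symbols, and the same machinery that manipulates data evaluates code.

```python
def evaluate(expr, env):
    """Evaluate a Lisp-style symbolic expression held as nested lists."""
    if isinstance(expr, str):        # a symbol: look up its value
        return env[expr]
    if not isinstance(expr, list):   # a literal, e.g. a number
        return expr
    op, *args = expr
    if op == "quote":                # return the expression itself, unevaluated
        return args[0]
    if op == "lambda":               # build an anonymous function
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)           # ordinary application
    return fn(*[evaluate(a, env) for a in args])

env = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

# (* 3 (+ 1 2))
program = ["*", 3, ["+", 1, 2]]
print(evaluate(program, env))        # prints 9

# code is data: the program is an ordinary list we can inspect or rewrite
print(program[0])                    # prints *
```

Treating programs as ordinary symbolic data is what made Lisp such a natural vehicle for early AI work: a program could build, inspect, and run other programs.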
Marvin Minsky, who has died aged 88, was a pioneer of artificial intelligence. In 1958 he co-founded the Artificial Intelligence Project at the Massachusetts Institute of Technology (MIT). Subsequently known as the AI Lab, it became a mecca for artificial intelligence research. His published works included Steps Toward Artificial Intelligence (1960), a manifesto that profoundly shaped AI in its earliest days, and Society of Mind (1985), which postulated that the brain is fundamentally an assembly of interacting, specialised, autonomous agents for tasks such as visual processing and knowledge management. That view of the architecture of the mind remains a cornerstone of AI research.
Oliver G. Selfridge, an innovator in early computer science and artificial intelligence, died on Wednesday in Boston. The cause was injuries suffered in a fall on Sunday at his home in nearby Belmont, Mass., said his companion, Edwina L. Rissland. Credited with coining the term "intelligent agents," for software programs capable of observing and responding to changes in their environment, Mr. Selfridge theorized about far more, including devices that would not only automate certain tasks but also learn through practice how to perform them better, faster and more cheaply. Eventually, he said, machines would be able to analyze operator instructions to discern not just what users requested but what they actually wanted to occur, not always the same thing. His 1958 paper "Pandemonium: A Paradigm for Learning," which proposed a collection of small components dubbed "demons" that together would allow machines to recognize patterns, was a landmark contribution to the emerging science of machine learning.
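The Pandemonium architecture can be sketched in a few lines. The features, weights, and inputs below are invented for illustration and are not from Selfridge's 1958 paper; the point is only the structure he proposed: layers of independent "demons" whose combined shouting settles a classification.

```python
# Toy sketch of Selfridge's Pandemonium: feature demons score the input,
# cognitive demons weigh those scores, a decision demon picks the loudest.

def feature_demons(image):
    """Each demon reports how strongly its feature appears in the input."""
    return {
        "vertical_bar": image.count("|"),
        "horizontal_bar": image.count("-"),
        "curve": image.count("(") + image.count(")"),
    }

# Cognitive demons: each letter's (hypothetical) interest in each feature.
COGNITIVE_DEMONS = {
    "L": {"vertical_bar": 1.0, "horizontal_bar": 1.0, "curve": -1.0},
    "T": {"vertical_bar": 1.0, "horizontal_bar": 1.5, "curve": -1.0},
    "O": {"vertical_bar": -0.5, "horizontal_bar": -0.5, "curve": 2.0},
}

def decision_demon(image):
    """Return the letter whose cognitive demon shouts loudest."""
    features = feature_demons(image)
    shouts = {
        letter: sum(w * features[f] for f, w in weights.items())
        for letter, weights in COGNITIVE_DEMONS.items()
    }
    return max(shouts, key=shouts.get)

print(decision_demon("(())"))   # curves dominate, so "O" wins
print(decision_demon("|-"))     # bars dominate, so "T" wins here
```

In the paper's learning proposal, the weights would not be hand-set as above but adjusted through practice, an early statement of what became machine learning.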