We human beings are the most sophisticated living machines on Earth. We are powerful intellectual beings with our own intelligence for making decisions, and our intellect ensured that we ruled over all other living creatures on this planet. We learned to acquire the skills necessary for our survival, but once our survival was assured we started to explore more; our intelligence, which knows no boundaries, wanted more. We started to invent tools that would save us time and ensure greater safety and security, and gradually we ventured to invent machines that could act as an extension of our intellectual brain, memorising more information and multitasking for us.
Artificial intelligence is not fiction. It is already our reality and has been for a very long time. Its origins date back to the 1950s. How has it changed since then? Are computers able to teach themselves? What does a smartphone have to do with the landing on the moon, and could machines be smarter than humans?
Alan Turing is often praised as the foremost figure in the historical process that led to the rise of the modern electronic computer. Particular attention has been devoted to the purported connection between a "Universal Turing Machine" (UTM), as introduced in Turing's article of 1936 [27], and the design and implementation in the mid-1940s of the first stored-program computers, with particular emphasis on the respective proposals of John von Neumann for the EDVAC [30] and of Turing himself for the ACE [26]. In some recent accounts, von Neumann's and Turing's proposals (and the machines built on them) are unambiguously described as direct implementations of a UTM, as defined in 1936. "What Turing described in 1936 was not an abstract mathematical notion but a solid three-dimensional machine (containing, as he said, wheels, levers, and paper tape); and the cardinal problem in electronic computing's pioneering years, taken on by both 'Proposed Electronic Calculator' and the 'First Draft', was just this: How best to build a practical electronic form of the UTM?" [9] "[The] essential point of the stored-program computer is that it is built to implement a logical idea, Turing's idea: the universal Turing machine of 1936." [18] This statement is of particular interest because, in his authoritative biography [21] of Turing (first published in 1983), Hodges typically follows a much more nuanced and careful approach to this entire issue. For instance, when referring to a mocking 1936 comment by David Champernowne, a friend of Turing, to the effect that the universal machine would require the Albert Hall to house its construction, Hodges commented that this "was fair comment on Alan's design in 'Computable Numbers', for if he had any thoughts of making it a practical proposition they did not show in the paper." [21] "Did [Turing] think in terms of constructing a universal machine at this stage?
There is not a shred of direct evidence, nor was the design as described in his paper in any way influenced by practical considerations ... My own belief is that the 'interest' [in building an actual machine] may have been at the back of his mind all the time after 1936, and quite possibly motivated some of his eagerness to learn about engineering techniques. But as he never said or wrote anything to this effect, the question must be left to tantalize the imagination." [21] Discussions of this issue tend to be based on retrospective accounts, sometimes even on hearsay. The most often quoted one comes from Max Newman, who had been Turing's teacher and mentor back in the early Cambridge days and who later became a leading figure in the rise of the modern electronic computer, sometimes collaborating with Turing. "The description that [Turing] gave of a 'universal' computing machine was entirely theoretical in purpose, but Turing's strong interest in all kinds of practical experiment made him even then interested in the possibility of actually constructing a machine on these lines." [6]
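The machine at issue in the passages above can be made concrete with a short simulator. The sketch below is a minimal one-tape Turing machine in Python; the transition table is a hypothetical example (a machine that flips every bit and halts at the first blank), not taken from Turing's 1936 paper or from any of the historical designs discussed here.

```python
def run_tm(tape, transitions, state="start", blank="_"):
    """Simulate a one-tape Turing machine; return the final tape as a string.

    transitions maps (state, read_symbol) -> (next_state, write_symbol, move),
    where move is "R" (right) or "L" (left). The machine stops in state "halt".
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example transition table: flip every bit, halt on the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("1011", flip_bits))  # -> 0100
```

A *universal* machine, in Turing's sense, is one whose transition table can read an encoding of any such table from its own tape and simulate it; the simulator above plays that role in software, which is precisely the analogy the quoted authors draw with the stored-program computer.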
John McCarthy, the inventor of the programming language Lisp and a pioneer in "artificial intelligence" technology, died Monday night. Mashable reports that McCarthy was also one of the first people to propose "selling computing power through a utility business model," in 1961. While the idea didn't gain much traction at the time, it is now coming back in a big way with the use of grid and cloud computing. Tributes to McCarthy poured in Tuesday, some from posters on Usenet, where McCarthy was an active presence, others from technology writers like Steven Levy, who wrote on Twitter: "Broke news to Siri that John McCarthy... died. She took it well but we humans will miss him."
At one laboratory, a small group of scientists and engineers worked to replace the human mind, while at the other, a similar group worked to augment it. In 1963 the mathematician-turned-computer scientist John McCarthy started the Stanford Artificial Intelligence Laboratory. The researchers believed that it would take only a decade to create a thinking machine. Also that year the computer scientist Douglas Engelbart formed what would become the Augmentation Research Center to pursue a radically different goal -- designing a computing system that would instead "bootstrap" the human intelligence of small groups of scientists and engineers. For the past four decades that basic tension between artificial intelligence and intelligence augmentation -- A.I. versus I.A. -- has been at the heart of progress in computer science as the field has produced a series of ever more powerful technologies that are transforming the world.
In 1955 the computer scientist John McCarthy, who has died aged 84, coined the term artificial intelligence, or AI. His pioneering work in AI – which he defined as "the science and engineering of making intelligent machines" – included organising the first Dartmouth conference on artificial intelligence, and developing the programming language Lisp in 1958. This was the second high-level language, after Fortran, and was based on the radical idea of computing using symbolic expressions rather than numbers. It helped spawn a whole AI industry. McCarthy was also the first to propose a time-sharing model of computing.
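The "radical idea of computing using symbolic expressions rather than numbers" can be illustrated with a small sketch. The example below uses Python (as a stand-in for Lisp) to differentiate expressions written as nested tuples; the expression language and function names are illustrative inventions, not taken from McCarthy's work, though symbolic differentiation was among Lisp's early motivating applications.

```python
def diff(expr, var):
    """Symbolic derivative over a tiny expression language:
    numbers, variable names, and the forms ("+", a, b) and ("*", a, b)."""
    if isinstance(expr, (int, float)):
        return 0                      # d(constant)/dx = 0
    if isinstance(expr, str):
        return 1 if expr == var else 0  # d(x)/dx = 1, other variables are constants
    op, a, b = expr
    if op == "+":
        return ("+", diff(a, var), diff(b, var))
    if op == "*":  # product rule: (a*b)' = a'*b + a*b'
        return ("+", ("*", diff(a, var), b), ("*", a, diff(b, var)))
    raise ValueError(f"unknown operator {op!r}")

print(diff(("*", "x", "x"), "x"))  # -> ('+', ('*', 1, 'x'), ('*', 'x', 1))
```

The point of the example is that both the input and the output are *expressions*, data structures the program manipulates, rather than numeric values -- the shift in perspective the passage credits to Lisp.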
Marvin Minsky, who has died aged 88, was a pioneer of artificial intelligence. In 1958 he co-founded the Artificial Intelligence Project at the Massachusetts Institute of Technology (MIT). Subsequently known as the AI Lab, it became a mecca for artificial intelligence research. His published works included Steps Toward Artificial Intelligence (1960), a manifesto that profoundly shaped AI in its earliest days, and Society of Mind (1985), which postulated that the brain is fundamentally an assembly of interacting, specialised, autonomous agents for tasks such as visual processing and knowledge management. That view of the architecture of the mind remains a cornerstone of AI research.
Computer pioneer and artificial intelligence (AI) theorist Alan Turing would have been 100 years old this Saturday. To mark the anniversary the BBC has commissioned a series of essays. In this, the fourth article, his influence on AI research and the resulting controversy are explored. Alan Turing was clearly a man ahead of his time. In 1950, at the dawn of computing, he was already grappling with the question: "Can machines think?"
This past October saw the death of John McCarthy, one of the pioneers of computer science and a founder of the field of artificial intelligence (AI), a phrase he is credited with inventing. It capped a sad month that also saw the passing of Apple cofounder Steve Jobs and of Dennis Ritchie, the coinventor of Unix and the C programming language. John McCarthy was born in Boston in 1927, but he grew up near Caltech, where he got his B.S. in mathematics. He detoured to Princeton for his Ph.D. but ended up at MIT, where he cofounded its artificial-intelligence lab, the world's first, before going on to Stanford in 1962 to found its artificial-intelligence lab. In between, he found time to invent Lisp, one of the most influential programming languages ever created.