Artificial Intelligence


Artificial Intelligence, or simply AI, is the science of designing intelligent computer programs or machines. AI will change the world as we know it by making everyday tasks easier and more efficient. Major developers such as IBM are already building AI, but the technology has not come close to reaching its full potential. Despite its benefits, there are many concerns about where the creation of AI could lead, some as drastic as humanity creating its own uncontrollable superiors, or even a third World War. Artificial Intelligence has been an enduring concept since the fifties, when Arthur Samuel created the first computer program that taught itself to play checkers in 1952.

Most Important Milestones in the History of Artificial Intelligence (AI)


The main advances in Artificial Intelligence happened only in the last sixty years. In 1956 John McCarthy coined the term 'Artificial Intelligence' when he conducted an academic conference in this field. However, long before that, people had been trying to understand whether machines could work and think like human beings. Let's learn the history of the development of ideas in Artificial Intelligence through this infographic. You've heard of Siri, Watson, Tay, Alexa? Artificial intelligence (AI) is driving these and many other technological breakthroughs.

The Rise Of Enterprise AI Adoption


When Alan Turing published his paper titled "Computing Machinery and Intelligence" in 1950, he was trying to answer a simple question -- Can machines think? In the paper he introduced the Turing Test: a human had to converse with a group of participants, one of which was a machine disguised as a human. The goal was to check whether the human could identify the machine. Though the test was not definitive, it opened the doors to the Artificial Intelligence we see around us now.

What is Artificial Intelligence? How Does AI Work?


Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: "Can machines think?" Turing's paper "Computing Machinery and Intelligence" (1950), and its subsequent Turing Test, established the fundamental goal and vision of artificial intelligence. At its core, AI is the branch of computer science that aims to answer Turing's question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines. The expansive goal of artificial intelligence has given rise to many questions and debates.

What is the State-of-the-Art & Future of Artificial Intelligence?


In 1958, the New York Times reported on a demonstration by the US Navy of Frank Rosenblatt's "perceptron" (a rudimentary precursor to today's deep neural networks): "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence". This optimistic take was quickly followed by similar proclamations from AI pioneers, this time about the promise of logic-based "symbolic" AI. In 1960 Herbert Simon declared that, "Machines will be capable, within twenty years, of doing any work that a man can do". The following year, Claude Shannon echoed this prediction: "I confidently expect that within a matter of 10 or 15 years, something will emerge from the laboratory which is not too far from the robot of science fiction fame". And a few years later Marvin Minsky predicted that, "Within a generation...the problems of creating 'artificial intelligence' will be substantially solved". John McCarthy promoted the term Artificial Intelligence with the wishful proposal that, 'Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions, and concepts, solve the kinds of problems now reserved for humans, and improve themselves.' AI was assumed to simulate human reasoning, giving a computer program the ability to learn and think.

Isaac Asimov's 'Foundation' lands on Apple TV September 24


Apple has revealed when you'll get to watch Foundation, its adaptation of Isaac Asimov's series of sci-fi novels. The show will debut on Apple TV on September 24th, with additional installments of the first ten-episode season dropping each week. The company also revealed another teaser trailer for Foundation, which stars Jared Harris as the leader of a group of exiles who predicts the end of the Galactic Empire. The group embarks on a journey to restore civilization by establishing The Foundation. Lee Pace also stars in the show, whose showrunner is David S. Goyer (The Dark Knight, Man of Steel). Apple is reducing the free TV trial it offers to customers who buy its devices from a year to three months.

New Alan Turing £50 note enters circulation

BBC News

The Bank of England is flying the Progress Pride flag above its building in London's Threadneedle Street on Wednesday to recognise improvements since Alan Turing's appalling treatment by the state for being gay. In 2013, Turing was given a posthumous royal pardon for his 1952 conviction for gross indecency.

The History of Artificial Intelligence


Artificial Intelligence has been developing for many years. This documentary video offers a great historical perspective on the history of A.I. As we move forward, A.I. development will continue to grow in unimaginable ways. If you want to learn about the historical origins of A.I., this is a good place to begin.

Will AI Make Interpreters and Sign Language Obsolete?


To understand the history of NLP, we have to go back to one of the most ingenious scientists of the modern era: Alan Turing. In 1950, Turing published "Computing Machinery and Intelligence", which discussed the notion of sentient, thinking computers. He argued that there were no convincing arguments against the idea that machines could think like humans, and proposed the "imitation game", now known as the Turing Test. Turing suggested a way to measure whether an artificial intelligence can think on its own: if it can fool a human into believing it is a human with a certain probability, it can be thought of as intelligent.
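The "certain probability" criterion can be made concrete as a simple threshold over judges' verdicts. The sketch below is a hypothetical illustration, not Turing's own formalism; the 30% default threshold is an assumption loosely inspired by Turing's 1950 speculation that, after five minutes of conversation, an average interrogator would have no more than a 70% chance of identifying the machine.

```python
# Hypothetical sketch: scoring an imitation-game trial.
# Each verdict is one judge's guess after conversing with the machine:
# True means the judge believed they were talking to a human.

def passes_imitation_game(verdicts, threshold=0.3):
    """Return True if the machine fooled at least `threshold`
    fraction of the judges. The 0.3 default is an assumption,
    not a value fixed anywhere in Turing's paper."""
    if not verdicts:
        return False
    fooled = sum(verdicts) / len(verdicts)
    return fooled >= threshold

# Example: 4 of 10 judges were fooled, so the 30% bar is cleared.
print(passes_imitation_game([True] * 4 + [False] * 6))  # True
```

Note that this treats "intelligence" purely behaviorally, as Turing intended: the test says nothing about how the machine produces its answers, only about whether human judges can tell the difference.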