If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
When artificial intelligence (AI) gets added to the mix, the results are explosive. I was about ten years old when I stumbled across a book on AI. The profound implications sank in, and I've been passionate about it ever since; I spent nearly twenty years as a professional AI researcher.
If you have a child in your life who is headed into "IT" or "computers" (or, honestly, any job that will exist in five years), please consider this valuable guidance. I can't cover AI without at least touching on this thought: AI is going to wipe out humanity. To a superintelligence in control of the planet, we would appear the way ants appear to us: mostly pests who get in the way, even as we express passing appreciation for their "primitive intelligence." It is important to understand two incredibly valuable definitions that I've paraphrased from the brilliant Yuval Noah Harari; people who voice the thought above conflate the two.
The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz's dream of a universal "concept language," and the ancient logical system of Aristotle. Dixon goes on to describe the creation of Boolean logic (which has only two values: TRUE and FALSE, represented as 1 and 0 respectively), and the insight by Claude E. Shannon that those two values could be represented by a circuit, which itself has only two states: open and closed. Dixon writes: Another way to characterize Shannon's achievement is that he was first to distinguish between the logical and the physical layer of computers. Dixon is being modest: the distinction may be obvious to computer scientists, but it is precisely the clear articulation of said distinction that undergirds Dixon's remarkable essay; obviously "computers" as popularly conceptualized were not invented by Aristotle, but he created the means by which they would work (or, more accurately, set humanity down that path).
These are three terms that are heard all the time now, but people often still get confused about what each one really entails. Below is a quick rundown of each that will hopefully clear things up a little and give you real insight into what these often-conflated terms mean. Artificial Intelligence, or AI for short, is the broadest way to describe computer intelligence. Back in 1956, at the Dartmouth Artificial Intelligence Conference, it was described this way: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." AI can come in various forms, including game-playing computer programs and voice recognition systems.
Summary: Looking beyond today's commercial applications of AI, where and how far will we progress toward an Artificial Intelligence with truly human-like reasoning and capability? This is about the pursuit of Artificial General Intelligence (AGI). There is no question that we're making a lot of progress in artificial intelligence (AI). So much so that we are rapidly approaching, or have already arrived at, a plateau in development where more effort is being put into commercializing existing AI capabilities than into improving them. As far back as November 2014, Kevin Kelly, cofounder of Wired magazine and prolific futurist, observed: "The business plans of the next 10,000 startups are easy to forecast: Take X and add AI." Well Kevin, you're right.
How worried should we be about artificial intelligence? Recently, I asked a number of AI researchers this question. The responses I received vary considerably; it turns out there is not much agreement about the risks or implications. Non-experts are even more confused about AI and its attendant challenges. Part of the problem is that "artificial intelligence" is an ambiguous term. By AI one can mean a Roomba vacuum cleaner, a self-driving truck, or one of those death-dealing Terminator robots.
For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network that determine the subset of parameters used and updated by the forward and backward passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation.
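The evolutionary loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: in real PathNet a pathway's fitness is the performance obtained after training only the parameters on that pathway, whereas here `fitness` is a stand-in score, and the names (`random_pathway`, `tournament_step`) and the sizes chosen are my own assumptions.

```python
import random

# Sketch of PathNet-style pathway evolution (illustrative only).
# A genotype is a pathway: for each of L layers, the indices of the
# (at most N of M) modules the forward/backward pass would use.
L_LAYERS, M_MODULES, N_ACTIVE = 3, 10, 3

def random_pathway():
    # One random subset of N_ACTIVE distinct modules per layer.
    return [random.sample(range(M_MODULES), N_ACTIVE) for _ in range(L_LAYERS)]

def mutate(pathway, rate=0.1):
    # Independently resample each module index with small probability.
    return [[random.randrange(M_MODULES) if random.random() < rate else m
             for m in layer]
            for layer in pathway]

def fitness(pathway):
    # Placeholder: PathNet would train the pathway's parameters on the
    # task and use the resulting accuracy/reward. Here we arbitrarily
    # reward pathways that use low-numbered modules, so evolution has
    # something to optimize.
    return -sum(m for layer in pathway for m in layer)

def tournament_step(population):
    # Binary tournament: the loser's genotype is overwritten by a
    # mutated copy of the winner's.
    i, j = random.sample(range(len(population)), 2)
    if fitness(population[i]) < fitness(population[j]):
        i, j = j, i
    population[j] = mutate(population[i])

population = [random_pathway() for _ in range(20)]
for _ in range(200):
    tournament_step(population)

best = max(population, key=fitness)
print(len(best), len(best[0]))  # prints "3 3": 3 layers, 3 modules each
```

After the loop, the fittest pathway marks the subnetwork whose parameters would be frozen and reused for the next task; everything off the pathway remains free capacity, which is how the scheme sidesteps catastrophic forgetting.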
With huge strides in AI--from advances in the driverless vehicle realm, to mastering games such as poker and Go, to automating customer service interactions--this advanced technology is poised to revolutionize businesses. But the terms AI, machine learning, and deep learning are often used haphazardly and interchangeably, when there are key differences between each type of technology. Here's a guide to the differences between these three tools to help you master machine intelligence. AI is the broadest way to think about advanced, computer intelligence. In 1956 at the Dartmouth Artificial Intelligence Conference, the technology was described as such: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."