If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
This is a Q&A excerpt on the topic of AI from a lecture given by Richard Feynman on September 26, 1985. The clip is posted on the Lex Clips channel, which I mostly use for video clips from the Artificial Intelligence podcast, though occasionally I post favorite clips from lectures given by others. I hope you find these interesting, thought-provoking, and inspiring. If you do, please subscribe, click the bell icon, and share! Artificial Intelligence podcast website: https://lexfridman.com/ai
Our idea is to evaluate each area step by step. As long as each feature is designed to look like it is part of the same body (same gender, age, and so on), then if an eye and a mouth can individually pass the test, they should also pass it together. This would allow a robot builder to assess progress as they go, ensuring each body part is indistinguishable from that of a human and preventing an end result that falls into the uncanny valley.
Computers have already taken over many things that used to be done by people. But how far can this go, and is it a good thing? This talk covers a little of the history of how computing and psychology have developed together, as well as what's happening now. We'll end with a discussion of what might happen in the future and what that may mean for how we live our lives.
The history of AI is often told as the story of machines getting smarter over time. What's lost is the human element in the narrative: how intelligent machines are designed, trained, and powered by human minds and bodies. In this six-part series, we explore that human history of AI--how innovators, thinkers, workers, and sometimes hucksters have created algorithms that can replicate human thought and behavior (or at least appear to). While it can be exciting to be swept up by the idea of super-intelligent computers that have no need for human input, the true history of smart machines shows that our AI is only as good as we are. In 1950, at the dawn of the digital age, Alan Turing published what was to become his best-known article, "Computing Machinery and Intelligence," in which he poses the question, "Can machines think?"
Here are the slides from my York Festival of Ideas keynote yesterday, which introduced the festival focus day Artificial Intelligence: Promises and Perils. I start the keynote with Alan Turing's famous question: Can a Machine Think? and explain that thinking is not just the conscious reflection of Rodin's Thinker but also the largely unconscious thinking required to make a pot of tea. I note that at the dawn of AI 60 years ago we believed the former kind of thinking would be really difficult to emulate artificially and the latter easy. In fact it has turned out to be the other way round: we've had computers that can expertly play chess for 20 years, but we can't yet build a robot that could go into your kitchen and make you a cup of tea. In slides 5 and 6 I suggest that we all assume a cat is smarter than a crocodile, which is smarter than a cockroach, on a linear scale of intelligence from not very intelligent to human intelligence.
Artificial intelligence (AI) is seen as both a boon and a threat. It uses our personal data to influence our lives without us realising it. It is used by social media to draw our attention to things we are interested in buying, and by our tablets and computers to predict what we want to type (good). It facilitates the targeting of voters to influence elections (bad, particularly if your side loses). Perhaps the truth, or otherwise, of allegations such as electoral interference should be judged in light of the interests of those promoting them.
The vast increase in speed, memory capacity, and communications ability allows today's computers to do things that were unthinkable when I started programming six decades ago. Then, computers were primarily used for numerical calculations; today, they process text, images, and sound recordings. Then, it was an accomplishment to write a program that played chess badly but correctly. Today's computers have the power to compete with the best human players. The incredible capacity of today's computing systems allows some purveyors to describe them as having "artificial intelligence" (AI). They claim that AI is used in washing machines, the "personal assistants" in our mobile devices, self-driving cars, and the giant computers that beat human champions at complex games. Remarkably, those who use the term "artificial intelligence" have not defined that term. I first heard the term more than 50 years ago and have yet to hear a scientific definition. Even now, some AI experts say that defining AI is a difficult (and important) question--one that they are working on. "Artificial intelligence" remains a buzzword, a word that many think they understand but nobody can define. Application of AI methods can lead to devices and systems that are untrustworthy and sometimes dangerous.
Futurist Maurice Conti says we've entered a new era where machines and humans partner to do what neither can do alone. He calls it the "Augmented Age." Maurice Conti is a designer and innovator. Currently, he is the Director of Applied Research and Innovation at Autodesk -- a 3-D design and engineering software company. His research focuses on how future innovations and advanced robotics will help make our world a better place.
About Austin: Austin is the CEO and co-founder of Yhat, Inc. He was previously at OnDeck Capital, the largest online small business lender in the United States. "Can machines think?" Alan Turing posed this question at the outset of his 1950 paper "Computing Machinery and Intelligence," a seminal piece of literature in the field of artificial intelligence. Turing wanted to know whether computers would eventually imitate humans' responses so well that people wouldn't be able to tell whether they were interacting with a human or a machine. Decades later, in an era when computers are capable of recognizing and responding to human speech, processing images, and even driving cars, the question for data scientists and engineers has become "What else can machines think about?"
On June 7, 2014, a Turing-Test competition, organized by the University of Reading to mark the 60th anniversary of Alan Turing's death, was won by a Russian chatterbot pretending to be a Ukrainian teenage boy named Eugene Goostman, which was able to convince one-third of the judges that it was human. The media was abuzz, claiming a machine had finally been able to pass the Turing Test. The test was proposed by Turing in his 1950 paper, "Computing Machinery and Intelligence," in which he considered the question, "Can machines think?" In order to avoid the philosophical conundrum of having to define "think," Turing proposed an "Imitation Game," in which a machine, communicating with a human interrogator via a "teleprinter," attempts to convince the interrogator that it (the machine) is human. Turing predicted that by the year 2000 it would be possible to fool an average interrogator with a probability of at least 30%.
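The 30% pass criterion described above can be sketched as a simple threshold check. This is a minimal illustration, not competition code; the function name and the judge verdicts are hypothetical, with the example numbers chosen to mirror the reported one-third result.

```python
# Sketch of the pass criterion: a machine "passes" if it convinces
# at least 30% of the judges that it is human.
# Verdicts are illustrative booleans (True = judge was fooled).

def passes_turing_threshold(verdicts, threshold=0.30):
    """Return True if the fraction of judges fooled meets the threshold."""
    if not verdicts:
        return False
    fooled = sum(1 for judged_human in verdicts if judged_human)
    return fooled / len(verdicts) >= threshold

# Example: 10 of 30 judges judged the chatbot to be human (one-third).
verdicts = [True] * 10 + [False] * 20
print(passes_turing_threshold(verdicts))  # True, since 10/30 ≈ 33% >= 30%
```

Note that passing this statistical threshold in short conversations is a much weaker claim than Turing's underlying question of whether machines can think, which is why the 2014 result was contested.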