If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
A short history lesson on AI

The holy grail of AI research is to create Artificial General Intelligence (AGI): a machine that operates at the same level of intelligence as a human. There have been countless theories and approaches over the years trying to achieve this goal, but, so far, none of them have succeeded.

The first wave of theories, in the 1950s and 1960s, focused on using logical rules to represent knowledge and make machines reason like humans. This rested on the assumption that what makes humans special is the ability to reason logically. For example, humans can easily understand logical statements of the form "If all men are mortal and Socrates is a man, then Socrates is mortal." This approach is now termed "Good Old-Fashioned AI" and was pioneered by researchers such as John McCarthy and Marvin Minsky.
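The Socrates syllogism gives a feel for how rule-based reasoning works in practice. Below is a minimal forward-chaining sketch in Python; the fact/rule encoding and names are illustrative inventions, not taken from any particular GOFAI system:

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived.

    facts: set of (predicate, subject) tuples, e.g. ("man", "socrates")
    rules: list of (premise_predicate, conclusion_predicate) pairs,
           encoding "for all X: premise(X) -> conclusion(X)"
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(derived):
                if pred == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))  # new fact inferred
                    changed = True
    return derived

# "All men are mortal" + "Socrates is a man" => "Socrates is mortal"
facts = {("man", "socrates")}
rules = [("man", "mortal")]
print(forward_chain(facts, rules))
```

Running this derives `("mortal", "socrates")` from the single starting fact, which is exactly the kind of mechanical deduction the early logic-based systems performed, albeit at far greater scale.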
For decades, writers and filmmakers have dreamt of the AI revolution. Whether it's the evil HAL 9000 from 2001: A Space Odyssey or the sentient droids of Star Wars, complex artificial intelligence is a hallmark in the depiction of advanced futuristic civilizations. But, in reality, how close are we to an AI revolution? When will we be able to reap the rewards of computerized thought? Will we ever see the intelligence that matches our wildest imagination in science-fiction?
In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. The term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions (processes of thinking) that humans associate with the human mind, such as "learning" and "problem solving".

What is, and what isn't, AI?

The popularity of AI in the media is in part due to the fact that people have started using the term to refer to things that used to be called by other names: you can see almost anything, from statistics and business analytics to manually encoded if-then rules, called AI.

Note: Suitcase words

Marvin Minsky, a cognitive scientist and one of the greatest pioneers in AI, coined the term "suitcase word" for terms that carry a whole bunch of different meanings that come along even if we intend only one of them.
This article is part of "the philosophy of artificial intelligence," a series of posts that explore the ethical, moral, and social implications of AI today and in the future. Would true artificial intelligence be conscious and experience the world like us? Will we lose our humanity if we install AI implants in our brains? Should robots and humans have equal rights? If I replicate an AI version of myself, who will be the real me? These are the kinds of questions we think of when watching science-fiction movies and TV series such as Her, Westworld, and Ex Machina.
AGI, Artificial General Intelligence, is the dream of some researchers -- and the nightmare of the rest of us. While AGI will never be able to do more than simulate some aspects of human behavior, its gaps will be more frightening than its capabilities. Will humans be interacting with seemingly intelligent robots in ten years? Yes, and we already are. Will robots be ubiquitous in our lives, with human-like abilities such as emotions and unsupervised learning?
If you had asked me a year or two ago when Artificial General Intelligence (AGI) would be invented, I'd have told you that we were a long way off. Most experts were saying that AGI was decades away, and some were saying it might not happen at all. The consensus is -- was? -- that all the recent progress in AI concerns so-called "narrow AI," meaning systems that can only perform one specific task. An AGI, or a "strong AI," which could perform any task as well as a human being, is a much harder problem. It is so hard that there isn't a clear roadmap for achieving it, and few researchers are openly working on the topic. GPT-3, the latest language model from the OpenAI team, is the first model to seriously shake that status quo.
Editor's note: The Towards Data Science podcast's "Climbing the Data Science Ladder" series is hosted by Jeremie Harris. Jeremie helps run a data science mentorship startup called SharpestMinds. Most machine learning models are used in roughly the same way: they take a complex, high-dimensional input (like a data table, an image, or a body of text) and return something very simple (a classification or regression output, or a set of cluster centroids). That makes machine learning ideal for automating repetitive tasks that might historically have been carried out only by humans. But this strategy may not be the most exciting application of machine learning in the future: increasingly, researchers and even industry players are experimenting with generative models, that produce much more complex outputs like images and text from scratch. These models are effectively carrying out a creative process -- and mastering that process hugely widens the scope of what can be accomplished by machines.
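The contrast drawn above, between models that map complex inputs to simple outputs and generative models that produce complex outputs from scratch, can be made concrete with a toy example. The sketch below trains a tiny character-level Markov model and samples text from it; this is a deliberately minimal illustration of the generative idea, not any specific model mentioned in the piece:

```python
import random
from collections import defaultdict

def train_markov(text, order=2):
    """Record which character follows each length-`order` context in the text."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, length=40, rng=None):
    """Sample new text one character at a time from the learned contexts."""
    rng = rng or random.Random(0)  # fixed seed for repeatable output
    out = seed
    while len(out) < length:
        choices = model.get(out[-len(seed):])
        if not choices:
            break  # dead end: context never seen in training
        out += rng.choice(choices)
    return out

corpus = "the cat sat on the mat and the cat ran"
model = train_markov(corpus)
print(generate(model, "th"))
```

Even this crude model "creates" strings it never saw verbatim, by recombining learned statistics; modern generative models do the same thing with vastly richer representations of context.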
Artificial Intelligence is an umbrella term used to describe a rapidly evolving, highly competitive technological field. It is often used erroneously and has come to define so many different approaches that even some experts are not able to define, in plain terms, exactly what artificial intelligence is. This makes the rapidly growing field of AI tricky to navigate and even more difficult to regulate properly. The point of regulation should be to protect people from physical, mental, environmental, social, or financial harm caused by the actions or negligence of others. For this article, let's stick to the general point described above).
The unprecedented progress in Artificial Intelligence (AI) [1-6] over the last decade came alongside multiple AI failures [7, 8] and cases of dual use, causing a realization that it is not sufficient to create highly capable machines: it is even more important to make sure that intelligent machines are beneficial to humanity. This led to the birth of a new subfield of research commonly known as AI Safety and Security, with hundreds of papers and books published annually on different aspects of the problem [13-31]. All such research is done under the assumption that the problem of controlling highly capable intelligent machines is solvable, which has not been established by any rigorous means. However, it is standard practice in computer science to first show that a problem doesn't belong to a class of unsolvable problems [32, 33] before investing resources into trying to solve it or deciding what approaches to try. Unfortunately, to the best of our knowledge, no mathematical proof or even rigorous argumentation has been published demonstrating that the AI control problem may be solvable, even in principle, much less in practice. Or as Gans puts it, citing Bostrom: "Thus far, AI researchers and philosophers have not been able to come up with methods of control that would ensure [bad] outcomes did not take place …".
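The "class of unsolvable problems" referenced above is the family of undecidable problems, of which the halting problem is the canonical example. The sketch below is an illustrative rendering of the classic diagonalization argument in Python, with a hypothetical `halts` oracle (no correct implementation of it can exist, which is the point):

```python
def halts(func, arg):
    """Hypothetical oracle: would return True iff func(arg) halts.

    The diagonalization below shows no total, correct implementation
    of this function is possible, so here it simply raises.
    """
    raise NotImplementedError("no universal halting decider can exist")

def diagonal(func):
    """Halt exactly when `func(func)` would not halt."""
    if halts(func, func):
        while True:  # func(func) halts, so loop forever
            pass
    # func(func) loops, so return immediately

# Asking about diagonal(diagonal) is contradictory either way:
# if it halts, the oracle said it halts, so it loops forever;
# if it loops, the oracle said it loops, so it halts at once.
```

Whether the AI control problem sits in such an undecidable class is exactly the open question the paragraph above raises; this sketch only illustrates what membership in that class means.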
The ultimate vision of artificial intelligence is systems that can handle the wide range of cognitive tasks that humans can. This idea of a single, general intelligence is referred to as Artificial General Intelligence (AGI): a single, generally intelligent system that can act and think much like humans. However, we have not yet achieved this, and as such, current AI applications are only capable of narrow tasks, such as recognition systems, hyper-personalization and recommendation tools, and even autonomous vehicles. This raises the question: Is AGI really around the corner, or are we chasing an elusive goal that we may never realize? Dr. Ben Goertzel, CEO and founder of the SingularityNET Foundation and one of the world's foremost experts in Artificial General Intelligence, is particularly visible and vocal about his thoughts on Artificial Intelligence, AGI, and where research and industry stand with regard to AGI, speaking at the (Virtual) OpenCogCon event this week.