If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
At the heart of the discipline of artificial intelligence is the idea that one day we'll be able to build a machine that's as smart as a human. Such a system is often referred to as an artificial general intelligence, or AGI, a name that distinguishes the concept from the broader field of study and makes clear that such a system would possess intelligence that is both broad and adaptable. To date, we've built countless systems that are superhuman at specific tasks, but none that can match a rat when it comes to general brain power. And despite the centrality of this idea to the field of AI, there's little agreement among researchers as to when this feat might actually be achieved.
In recent years, Elon Musk has become one of the most vocal critics of artificial intelligence, issuing numerous warnings about the threat that powerful machines pose to the future of mankind. Now the 47-year-old billionaire inventor and Tesla chief executive has unveiled a potential way for the meager human brain to compete with a superior force that Musk has compared to "an immortal dictator" and "the devil." During an interview with Axios co-founders Jim VandeHei and Mike Allen that aired Sunday, Musk said humans must merge with artificial intelligence, creating a "symbiosis" that leads to "a democratization of intelligence." "Essentially, how do we ensure that the future constitutes the sum of the will of humanity?" he asked. "And so, if we have billions of people with the high-bandwidth link to the AI extension of themselves, it would actually make everyone hyper-smart."
Many people think of artificial intelligence (AI) as a completely automated process with no human input, but much of the data used by AI systems and many of the ways these systems are deployed are reliant on human input. In fact, despite fears that AI may replace human beings in the digital workplace, it is more likely that humans and machines will work together. People and machines are entering a new era of learning in which AI augments ordinary intelligence and helps people realize their full potential, according to Deepak Agarwal who heads machine learning and AI at LinkedIn. Take the example of profile data, he said. At a fundamental level, almost all of LinkedIn's member data is generated by members themselves.
The world has been through multiple 'AI winters' (a time when the perception of artificial intelligence as a solution collapses and funding is withdrawn from major projects) since the tech's early days in the mid-20th century. However, the last such period, in the 1990s, is long since over, and AI is back in vogue once again, with ever-rising numbers of people working to prove that machines can simulate human learning. One of AI's early movers and shakers was Marvin Minsky, whose work includes the first randomly wired neural network learning machine, which he built in 1951. In 1967 Minsky predicted that "within a generation... the problem of creating 'artificial intelligence' will substantially be solved." He was wrong about that.
This (currently) five-part feature should provide you with a very basic understanding of what AI is, what it can do, and how it works. The guide contains articles on (in order published) neural networks, computer vision, natural language processing, algorithms, and artificial general intelligence. There are few technologies that inspire the imagination like artificial intelligence. And, in the field of AI, the Holy Grail is living machines. The quest to imbue machines with the spark of life is an ancient one.
"Intelligence amplification (IA) (also referred to as cognitive augmentation and machine augmented intelligence) refers to the effective use of information technology in augmenting human intelligence. The idea was first proposed in the 1950s and 1960s. IA is sometimes contrasted with AI (artificial intelligence), that is, the project of building a human-like intelligence in the form of an autonomous technological system such as a computer or robot. AI has encountered many fundamental obstacles, practical as well as theoretical, which for IA seem moot, as it needs technology merely as an extra support for an autonomous intelligence that has already proven to function. Moreover, IA has a long history of success, since all forms of information technology, from the abacus to writing to the Internet, have been developed basically to extend the information processing capabilities of the human mind."
The overarching problem in artificial intelligence (AI) is that we do not understand the intelligence process well enough to enable the development of adequate computational models. Much work has been done in AI over the years at lower levels, but a big part of what has been missing involves the high level, abstract, general nature of intelligence. We address this gap by developing a model for general intelligence.

To accomplish this, we focus on three basic aspects of intelligence. First, we must realize the general order and nature of intelligence at a high level. Second, we must come to know what these realizations mean with respect to the overall intelligence process. Third, we must describe these realizations as clearly as possible.

We propose a hierarchical model to help capture and exploit the order within intelligence. The underlying order involves patterns of signals that become organized, stored and activated in space and time. These patterns can be described using a simple, general hierarchy, with physical signals at the lowest level, information in the middle, and abstract signal representations at the top. This high level perspective provides a big picture that helps us see the intelligence process, thereby enabling fundamental realizations, a better understanding and clear descriptions of the intelligence process. The resulting model can be used to support all kinds of information processing across multiple levels of abstraction. As computer technology improves, and as cooperation increases between humans and computers, people will become more efficient and more productive in performing their information processing tasks.
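As a purely illustrative sketch, the three-level hierarchy described above (physical signals at the bottom, information in the middle, abstract representations at the top) can be modeled as a simple layered data structure. All class and method names here are invented for illustration and are not from the model's authors:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Level:
    """One level of the hypothetical hierarchy, holding the patterns stored at it."""
    name: str
    patterns: List[str] = field(default_factory=list)

@dataclass
class IntelligenceHierarchy:
    """Toy three-level hierarchy: signals -> information -> abstractions."""
    levels: List[Level]

    def abstract_up(self, signal: str) -> str:
        # A signal is stored at each level in turn, becoming a more
        # abstract representation as it rises through the hierarchy.
        for level in self.levels:
            level.patterns.append(signal)
            signal = f"{level.name}({signal})"
        return signal

hierarchy = IntelligenceHierarchy(levels=[
    Level("signal"),       # physical signals (lowest level)
    Level("information"),  # organized information (middle)
    Level("abstraction"),  # abstract representations (top)
])

print(hierarchy.abstract_up("photon burst"))
# abstraction(information(signal(photon burst)))
```

The point of the sketch is only structural: each level wraps and stores what the level below produced, which mirrors the paper's claim that patterns of signals become organized and re-represented at successively higher levels of abstraction.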
AI is a large topic, and there is no single agreed definition of what it involves. But there seems to be more agreement than disagreement. Broadly speaking, AI is an umbrella term for the field in computer science dedicated to making machines simulate different aspects of human intelligence, including learning, decision-making and pattern recognition. Some of the most striking applications, in fields like speech recognition and computer vision, are things people take for granted when assessing human intelligence but have been beyond the limits of computers until relatively recently. The term "artificial intelligence" was coined in 1956 by mathematics professor John McCarthy, who wrote, "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Artificial and pervasive intelligence are paving the way for the transition into Industry 4.0 and are likely to have a lasting impact on the manufacturing sector. Internet of Things (IoT) devices, mobility, and cloud services have given rise to smart machines. Medical devices such as pacemakers, smartphones and tablets, security systems, and manufacturing equipment on the factory floor are only some examples of technologies which are becoming linked to Wi-Fi and the cloud, and this shift towards connectivity is a major element of Industry 4.0. Industry 4.0 is the transition from traditional manufacturing processes and equipment to smart devices, IoT, machine-to-machine (M2M) technologies and data analytics. While a shift towards modern solutions -- when implemented properly -- can result in better visibility on the factory floor and in supply chains, a boost in revenue and an uptick in efficiency, artificial intelligence (AI) may have the potential to push Industry 4.0 even further forward.
Human intelligence precedes civilization; artificial and superhuman intelligences, however, will redefine it. Current research programs in artificial general intelligence (AGI) and intelligence enhancement (IE) seek to remove human error from their most ambitious technological quests. On the one hand, using evolutionary algorithms, AGI aims to develop a fully automated, increasingly independent, gradually cognitive, and eventually conscious artificial being. On the other hand, using neurotechnology, IE intends to create a super-intelligent and inherently different human being capable of counteracting the inexorable ascension of machines in the next few years. But what is the limit of such scientific enterprises?