Artificial Intelligence began as a philosophical conundrum in ancient times, developed into a science-fiction forecast (and warning) in the modern era, and is a practical reality today. From the earliest periods of recorded history to the present day, it has attracted some of the brightest minds and most influential personalities. Here is a run-down of some of the most insightful, important, or prescient things that have been said. Alan Turing was a pioneer in bringing AI from the realm of philosophical speculation toward reality. He recognized in the 1950s that we would need a deeper understanding of human intelligence before we could hope to build machines that "think" like us: "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."
To start implementing AI, you should have a basic knowledge of traditional algorithms and concepts. Artificial intelligence has fascinated the world's minds for decades. The quest to create an artificial brain was inspired by the natural processes of the human brain, and early visions of AI appeared in numerous science-fiction books and movies. Gradually, the idea matured into a scientific concept and spurred the creation of practical intelligent technologies.
Artificial intelligence research is still in its infancy, at least compared to computer science in general, but the availability of nearly unlimited computing resources is accelerating the field. As someone with such resources at his disposal, Swami Sivasubramanian, vice president of AI at Amazon Web Services, is watching this play out firsthand. Last week Sivasubramanian walked GeekWire Cloud Tech Summit attendees through the array of artificial intelligence and machine-learning services his team has developed for AWS customers as well as for Amazon's own internal services. If you've been through a few tech cycles, you've already heard a lot about artificial intelligence. Much has been promised by this research field over several decades, but the enormous amount of data now moving into cloud computing services like AWS allows researchers like Sivasubramanian to make real breakthroughs that weren't possible when data sets were scattered and siloed.
An important form of learning involves acquiring skills that let an agent achieve its goals. While there has been considerable work on learning in planning, most approaches have been sensitive to the representation of domain context, which limits their generality. A learning mechanism that constructs skills effectively across different representations would support more robust behavior. In this paper, we present a novel approach to learning hierarchical task networks that acquires conceptual predicates as learning proceeds, making it less dependent on carefully crafted background knowledge. The representation acquisition procedure expands the system's knowledge about the world and leads to more rapid learning. We show the effectiveness of the approach by comparing it with one that does not change its domain representation.
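To make the idea of a hierarchical task network concrete, the following is a minimal sketch of HTN decomposition in Python. The task names, methods, and primitive actions here are hypothetical illustrations, not the domain or algorithm from the paper; a system like the one described would learn the method library (and new predicates) from experience rather than having it hand-coded.

```python
# Minimal HTN decomposition sketch. All task/action names below are
# hypothetical examples, not from the paper's actual domain.

PRIMITIVE = {"pick-up", "move", "put-down"}  # directly executable actions

# Each method maps a non-primitive task to an ordered list of subtasks.
# In the paper's setting, this table is what the system learns.
METHODS = {
    "deliver": ["fetch", "move", "put-down"],
    "fetch": ["move", "pick-up"],
}

def decompose(task):
    """Recursively expand a task into a flat sequence of primitive actions."""
    if task in PRIMITIVE:
        return [task]
    plan = []
    for subtask in METHODS[task]:
        plan.extend(decompose(subtask))
    return plan

print(decompose("deliver"))  # prints ['move', 'pick-up', 'move', 'put-down']
```

A learned HTN system would also track the conceptual predicates that make each method applicable, which is what lets it transfer across representations.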
Cognitive Science (CogSci) and AI are addressed from the perspective of inductive inference research, specifically as applied to language learning. Because language is so central to intelligence, results in this area bridge gaps between the two fields. We give examples of rigorous results intractable for both AI machines and humans; AI results that humans find satisfactory; and AI-hard problems with "good enough" solutions obtained adaptively. We conclude that the lack of human experience may preclude machines from thinking as humans do, but CogSci can help AI produce human-acceptable results. Conversely, CogSci can benefit when researchers study human processes to improve AI machines. We see no competition: cooperation will advance both fields.