If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
"By studying the problem-solving techniques that people use to design algorithms we can learn something about building systems that automatically derive algorithms or assist human designers. In this paper we present a model of algorithm design based on our analysis of the protocols of two subjects designing three convex hull algorithms. The subjects work mainly in a data-flow problem space in which the objects are representations of partially specified algorithms. A small number of general-purpose operators construct and modify the representations; these operators are adapted to the current problem state by means-ends analysis. The problem space also includes knowledge-rich schemas such as divide and conquer that subjects incorporate into their algorithms. A particularly versatile problem-solving method in this problem space is symbolic execution, which can be used to refine, verify, or explain components of an algorithm. The subjects also work in a task-domain space about geometry. The interplay between problem solving in the two spaces makes possible the process of discovery. We have observed that the time a subject takes to design an algorithm is proportional to the number of components in the algorithm's data-flow representation. Finally, the details of the problem spaces provide a model for building a robust automated system." Information Processing and Management 20(l-2):97-118
Research in artificial intelligence at the University of Texas at Austin is diverse. It is spread across many departments (Computer Science, Mathematics, the Institute for Computer Science and Computer Applications, and the Linguistics Research Center) and it covers most of the major subareas of AI (natural language, theorem proving, knowledge representation, languages for AI, and applications). Related work is also being done in several other departments, including EE (low-level vision), Psychology, Linguistics, and the Center for Cognitive Science.
The Stanford Artificial Intelligence Project, later known as the Stanford AI Lab or SAIL, was created by Prof. John McCarthy shortly after his arrival at Stanford in 1962. As a faculty member in the Computer Science Division of the Mathematics Department, McCarthy began supervising research in artificial intelligence and timesharing systems with a few students. From this small start, McCarthy built a large and active research organization involving many other faculty and research projects as well as his own. Nevertheless, there are some important dimensions to the research that took place in the AI Lab that we will try to put in historical context in this brief introduction.
Too few people are doing basic research in AI relative to the number working on applications. The ratio of basic to applied research is lower in AI than in the older sciences and in computer science generally. This is unfortunate, because reaching human-level artificial intelligence will require fundamental conceptual advances.
In keeping with a desire to abstract general principles in AI, this article begins to examine some relationships among heuristic learning in search, classification of utility, properties of certain structures, measurement of acquired knowledge, and efficiency of associated learning. In the process, a simple definition is given for conceptual knowledge, considered as information compression. The discussion concludes that domain-specific conceptual knowledge can be acquired. Among other implications of the analysis is that statistical observation of probabilities can result in the equivalent of planning, in low susceptibility to error, and in efficient learning.
Artificial Intelligence, as a maturing scientific/engineering discipline, is beginning to find its niche among the variety of subjects that are relevant to intelligent, perceptive behavior. A view of AI is presented that is based on a declarative representation of knowledge with semantic attachments to problem-specific procedures and data structures. Several important challenges to this view are briefly discussed. It is argued that research in the field would be stimulated by a project to develop a computer individual that would have a continuing existence in time.
While some proposals for supercomputers increase the powers of existing machines like CDC and Cray supercomputers, others suggest radical changes of architecture to speed up non-traditional operations such as logical inference in PROLOG, recognition/action in production systems, or message passing. We examine the case of parallel PROLOG to identify several related computations which subsume those of parallel PROLOG, but which have much wider interest, and which may have roughly the same difficulty of mechanization. Similar considerations apply to some other proposed architectures as well, raising the possibility that current efforts may be limiting their aims unnecessarily.
AI research at JPL started in 1972 when design and construction of an experimental "Mars Rover" began. Early in that effort, it was recognized that rover planning capabilities were inadequate. Research in planning was begun in 1975, and work on a succession of AI expert systems of steadily increasing power has continued to the present. Within the group, we have concentrated our efforts on expert systems, although work on vision and robotics has continued in separate organizations, with which we have maintained informal contacts.