If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here is the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Scientists use two forms of knowledge in the construction of explanatory models: generalized entities and processes that relate them; and constraints that specify acceptable combinations of these components. Previous research on inductive process modeling, which constructs models from knowledge and time-series data, has relied on handcrafted constraints. In this paper, we report an approach to discovering such constraints from a set of models that have been ranked according to their error on observations. Our approach adapts inductive techniques for supervised learning to identify process combinations that characterize accurate models. We evaluate the method's ability to reconstruct known constraints and to generalize well to other modeling tasks in the same domain. Experiments with synthetic data indicate that the approach can successfully reconstruct known modeling constraints. Another study using natural data suggests that transferring constraints acquired from one modeling scenario to another within the same domain considerably reduces the amount of search for candidate model structures while retaining the most accurate ones.
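The abstract does not spell out the induction procedure, but the core idea — label ranked models as accurate or inaccurate by their error, then find process combinations that appear only in the inaccurate ones — can be illustrated with a minimal sketch. All names here are hypothetical, and each model is simplified to a set of process names plus an error score:

```python
from itertools import combinations

def discover_constraints(models, top_k=2):
    """Propose mutual-exclusion constraints from error-ranked models.

    models: list of (process_set, error) pairs; lower error = more accurate.
    The top_k lowest-error models are treated as accurate; a pair of
    processes that co-occurs only in inaccurate models is flagged as a
    candidate constraint forbidding that combination.
    """
    ranked = sorted(models, key=lambda m: m[1])
    accurate = [set(p) for p, _ in ranked[:top_k]]
    inaccurate = [set(p) for p, _ in ranked[top_k:]]
    processes = set().union(*(set(p) for p, _ in models))
    constraints = []
    for a, b in combinations(sorted(processes), 2):
        in_accurate = any({a, b} <= m for m in accurate)
        in_inaccurate = any({a, b} <= m for m in inaccurate)
        if in_inaccurate and not in_accurate:
            constraints.append((a, b))
    return constraints

# Toy ecosystem-modeling example with invented process names:
models = [
    ({"growth", "grazing"}, 0.1),
    ({"growth", "loss"}, 0.2),
    ({"growth", "grazing", "loss"}, 0.9),
    ({"grazing", "loss"}, 0.8),
]
print(discover_constraints(models))  # [('grazing', 'loss')]
```

A real implementation would learn richer constraint forms (e.g., always-together or at-most-one conditions) and would need to guard against pairs that are merely rare, but the supervised framing — accurate versus inaccurate model structures as training labels — is the same.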
Blisard, Sam (Naval Research Laboratory) | Carmichael, Ted (University of North Carolina at Charlotte) | Ding, Li (University of Maryland, Baltimore County) | Finin, Tim (University of Maryland, Baltimore County) | Frost, Wende (Naval Research Laboratory) | Graesser, Arthur (University of Memphis) | Hadzikadic, Mirsad (University of North Carolina at Charlotte) | Kagal, Lalana (Massachusetts Institute of Technology) | Kruijff, Geert-Jan M. (German Research Center for Artificial Intelligence) | Langley, Pat (Arizona State University) | Lester, James (North Carolina State University) | McGuinness, Deborah L. (Rensselaer Polytechnic Institute) | Mostow, Jack (Carnegie Mellon University) | Papadakis, Panagiotis (Sapienza University of Rome) | Pirri, Fiora (Sapienza University of Rome) | Prasad, Rashmi (University of Wisconsin-Milwaukee) | Stoyanchev, Svetlana (Columbia University) | Varakantham, Pradeep (Singapore Management University)
The Association for the Advancement of Artificial Intelligence was pleased to present the 2011 Fall Symposium Series, held Friday through Sunday, November 4–6, at the Westin Arlington Gateway in Arlington, Virginia. The titles of the seven symposia are as follows: (1) Advances in Cognitive Systems; (2) Building Representations of Common Ground with Intelligent Agents; (3) Complex Adaptive Systems: Energy, Information and Intelligence; (4) Multiagent Coordination under Uncertainty; (5) Open Government Knowledge: AI Opportunities and Challenges; (6) Question Generation; and (7) Robot-Human Teamwork in Dynamic Adverse Environment. The highlights of each symposium are presented in this report.
In this article, I claim that research on cognitive architectures is an important path to the development of general intelligent systems. I contrast this paradigm with other approaches to constructing such systems, and I review the theoretical commitments associated with a cognitive architecture. I illustrate these ideas using a particular architecture -- ICARUS -- by examining its claims about memories, about the representation and organization of knowledge, and about the performance and learning mechanisms that affect memory structures. I also consider the high-level programming language that embodies these commitments, drawing examples from the domain of in-city driving. In closing, I consider ICARUS's relation to other cognitive architectures and discuss some open issues that deserve increased attention.
Paul Cohen's book Empirical Methods for Artificial Intelligence aims to encourage this trend by providing AI practitioners with the knowledge and tools needed for careful empirical evaluation. The volume provides broad coverage of experimental design and statistics, ranging from a gentle introduction of basic ideas to a detailed presentation of advanced techniques, often combined with illustrative examples of their application to the empirical study of AI. The book is generally well written, clearly organized, and easy to understand; it contains some mathematics -- but not enough to overwhelm readers. Examples come from AI work on planning, machine learning, natural language, and diagnosis.
In this article, we discuss a method for learning useful conditions on the application of operators during heuristic search. Since learning is not attempted until a complete solution path has been found for a problem, credit for correct moves and blame for incorrect moves are easily assigned. We review four learning systems that have incorporated similar techniques to learn in the domains of algebra, symbolic integration, and puzzle solving. We conclude that the basic approach of learning from solution paths can be applied to any situation in which problems can be solved by sequential search.
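The credit-assignment scheme described above can be sketched in a few lines. This is an illustrative toy, not the article's implementation: states are represented as frozensets of features, and the learned condition for an operator is simply the features shared by all its positive (on-path) states but by none of its negative (off-path) states.

```python
def assign_credit(solution_path, explored):
    """Label operator applications once a complete solution is found.

    solution_path: list of (state, operator) pairs along the solution.
    explored: list of (state, operator) pairs tried during search.
    Moves on the path earn credit (positive examples); moves tried
    from an on-path state but absent from the path earn blame
    (negative examples). Moves from off-path states are ignored.
    """
    on_path = set(solution_path)
    path_states = {state for state, _ in solution_path}
    positives, negatives = [], []
    for state, op in explored:
        if (state, op) in on_path:
            positives.append((state, op))
        elif state in path_states:
            negatives.append((state, op))
    return positives, negatives

def learn_conditions(positives, negatives, op):
    """Condition for op: features common to every positive state
    for op and present in no negative state for op."""
    pos_states = [s for s, o in positives if o == op]
    neg_states = [s for s, o in negatives if o == op]
    if not pos_states:
        return set()
    common = set.intersection(*(set(s) for s in pos_states))
    for s in neg_states:
        common -= set(s)
    return common
```

For example, if "op2" succeeded from a state with features {b, c} but failed from a state with features {a, b}, the learned condition is {c} — the feature that distinguishes the correct application. Real systems of this kind use more expressive condition languages, but the two-step structure (assign credit along the path, then generalize per operator) is the one the article describes.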