Beal, Jacob (BBN Technologies) | Bello, Paul A. (Office of Naval Research) | Cassimatis, Nicholas (Rensselaer Polytechnic Institute) | Coen, Michael H. (University of Wisconsin-Madison) | Cohen, Paul R. (University of Arizona) | Davis, Alex (Stottler Henke) | Maybury, Mark T. (The MITRE Corporation) | Samsonovich, Alexei (George Mason University) | Shilliday, Andrew (Rensselaer Polytechnic Institute) | Skubic, Marjorie (University of Missouri-Columbia) | Taylor, Joshua (Rensselaer Polytechnic Institute) | Walter, Sharon (AFRL) | Winston, Patrick (Massachusetts Institute of Technology) | Woolf, Beverly Park (University of Massachusetts)
The Association for the Advancement of Artificial Intelligence was pleased to present the 2008 Fall Symposium Series, held Friday through Sunday, November 7-9, at the Westin Arlington Gateway in Arlington, Virginia. The titles of the seven symposia were (1) Adaptive Agents in Cultural Contexts, (2) AI in Eldercare: New Solutions to Old Problems, (3) Automated Scientific Discovery, (4) Biologically Inspired Cognitive Architectures, (5) Education Informatics: Steps toward the International Internet Classroom, (6) Multimedia Information Extraction, and (7) Naturally Inspired AI.
Cohen, Paul R.
If it is true that good problems produce good science, then it is worthwhile to identify good problems, and even more worthwhile to discover the attributes that make them good. This discovery process is necessarily empirical, so we examine several challenge problems, beginning with Turing's famous test, along with more than a dozen attributes that challenge problems might have. We are led to a contrast between research strategies -- the successful "divide and conquer" strategy and the promising but largely untested "developmental" strategy -- and we conclude that good challenge problems encourage the latter.
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2001 Spring Symposium Series on Monday through Wednesday, 26 to 28 March 2001, at Stanford University. The titles of the seven symposia were (1) Answer Set Programming: Toward Efficient and Scalable Knowledge Representation and Reasoning, (2) Artificial Intelligence and Interactive Entertainment, (3) Game-Theoretic and Decision-Theoretic Agents, (4) Learning Grounded Representations, (5) Model-Based Validation of Intelligence, (6) Robotics and Education, and (7) Robust Autonomy.
Now completing its first year, the High-Performance Knowledge Bases Project promotes technology for developing very large, flexible, and reusable knowledge bases. The project is supported by the Defense Advanced Research Projects Agency and includes more than 15 contractors in universities, research laboratories, and companies.
Benchmarks, test beds, and controlled experimentation are becoming more common. We discuss these issues as they relate to research on agent design. We survey existing test beds for agents and argue for appropriate caution in their use. We end with a debate on the proper role of experimental methodology in the design and validation of planning agents.
Cohen, Paul R.
A survey of 150 papers from the Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90) shows that AI research follows two methodologies, each incomplete with respect to the goals of designing and analyzing AI systems but with complementary strengths. I propose a mixed methodology and illustrate it with examples from the proceedings.
Phoenix is a real-time, adaptive planner that manages forest fires in a simulated environment. Alternatively, Phoenix is a search for functional relationships between the designs of agents, their behaviors, and the environments in which they work. In fact, both characterizations are appropriate and together exemplify a research methodology that emphasizes complex, dynamic environments and complete, autonomous agents. This article describes the underlying methodology and illustrates the architecture and behavior of Phoenix agents.
The choice of implication as a representation for empirical associations, and of deduction as a model of inference, requires a mechanism extraneous to deduction to manage the uncertainty associated with inference. Consequently, the interpretation of representations of uncertainty is unclear. The calculation of representativeness depends on the nature of the associations between evidence and conclusions. We discuss an expert system that uses endorsements to control the search for the most representative conclusion, given the evidence.