If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Cox, Michael T. (Wright State University) | Alavi, Zohreh (Wright State University) | Dannenhauer, Dustin (Lehigh University) | Eyorokon, Vahid (Wright State University) | Munoz-Avila, Hector (Lehigh University) | Perlis, Don (University of Maryland)
We present a metacognitive, integrated, dual-cycle architecture whose function is to provide agents with a greater capacity for acting robustly in a dynamic environment and managing unexpected events. We present MIDCA 1.3, an implementation of this architecture that explores a novel approach to goal generation, planning, and execution in surprising situations. We formally define the mechanism and report empirical results from this goal generation algorithm. Finally, we describe the similarity between its choices at the cognitive level and those at the metacognitive level.
We study the problem of generating a set of Finite State Machines (FSMs) modeling the behavior of multiple, distinct NPCs. We observe that nondeterministic planning techniques can be used to generate FSMs by following conventions typically used when manually creating FSMs modeling NPC behavior. We implement our ideas in DivNDP, the first algorithm for automated diverse FSM generation.
Non-player character diversity enriches game environments, increasing their replay value. We propose a method for obtaining character behavior diversity based on the diversity of plans enacted by characters, and demonstrate this method in a scenario in which characters have multiple choices. Using case-based planning techniques, we reuse plans to produce varied character behaviors that simulate different personality traits.
We present CLASS Q-L (for: class Q-learning), an application of the Q-learning reinforcement learning algorithm to play complete Wargus games. Wargus is a real-time strategy game where players control armies consisting of units of different classes (e.g., archers, knights). CLASS Q-L uses a single Q-table for each class of unit, so that every unit of a class is controlled by, and updates, the same table. This enables rapid learning because Wargus games contain many units of the same class. We present initial results of CLASS Q-L against a variety of opponents.
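The class-level Q-learning idea above can be sketched in a few lines: every unit of the same class reads from and writes to one shared Q-table, so experience pools across units. This is a minimal illustrative sketch, not the paper's implementation; the state and action encodings and the class names are hypothetical placeholders.

```python
import random
from collections import defaultdict

class ClassQLearner:
    """Tabular Q-learning with one shared Q-table per unit class."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # one Q-table per unit class: class -> {(state, action): value}
        self.q = defaultdict(lambda: defaultdict(float))

    def choose(self, unit_class, state, actions):
        # epsilon-greedy action selection over this class's shared table
        if random.random() < self.epsilon:
            return random.choice(actions)
        table = self.q[unit_class]
        return max(actions, key=lambda a: table[(state, a)])

    def update(self, unit_class, state, action, reward, next_state, next_actions):
        # every unit of this class writes into the same table,
        # so learning is pooled across all units of the class
        table = self.q[unit_class]
        best_next = max((table[(next_state, a)] for a in next_actions), default=0.0)
        td_target = reward + self.gamma * best_next
        table[(state, action)] += self.alpha * (td_target - table[(state, action)])
```

Because many units of a class act and learn concurrently, the shared table accumulates experience far faster than a per-unit table would, which matches the "rapid learning" claim above.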
Diversity-aware planning consists of generating multiple plans which, while solving the same problem, are dissimilar from one another. Quantitative plan diversity is domain-independent and does not require extensive knowledge-engineering effort, but can fail to reflect plan differences that are relevant to users. Qualitative plan diversity is based on domain-specific characteristics, thus being of greater practical value, but may require substantial knowledge engineering. We demonstrate a domain-independent diverse plan generation method that is based on customizable plan distance metrics and amenable to both quantitative and qualitative diversity. Qualitative plan diversity is obtained with minimal knowledge-engineering effort, using distance metrics which incorporate domain-specific content.
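The customizable-metric idea above can be illustrated with a small sketch: plans are action sequences, distance functions are pluggable, and a greedy loop selects a diverse subset. The specific metrics below (Jaccard distance over action sets, and an equality test on a user-supplied plan feature) are illustrative assumptions, not the paper's definitions.

```python
def quantitative_distance(plan_a, plan_b):
    """Domain-independent metric: Jaccard distance over action sets."""
    a, b = set(plan_a), set(plan_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def qualitative_distance(plan_a, plan_b, feature):
    """Domain-specific metric: compares a user-supplied feature of each
    plan, e.g. which route a character takes or which tool it uses."""
    return 0.0 if feature(plan_a) == feature(plan_b) else 1.0

def select_diverse(plans, k, distance):
    """Greedily pick k plans, each maximizing its minimum distance
    to the plans already chosen (farthest-point selection)."""
    chosen = [plans[0]]
    while len(chosen) < k:
        best = max((p for p in plans if p not in chosen),
                   key=lambda p: min(distance(p, c) for c in chosen))
        chosen.append(best)
    return chosen
```

Swapping `quantitative_distance` for a `qualitative_distance` bound to a domain feature changes the kind of diversity obtained without touching the selection loop, which is the sense in which the method is amenable to both.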
Goal-driven autonomy (GDA) is a reflective model of goal reasoning that controls the focus of an agent's planning activities by dynamically resolving unexpected discrepancies in the world state, which frequently arise when solving tasks in complex environments. GDA agents have performed well on such tasks by integrating methods for discrepancy recognition, explanation, goal formulation, and goal management. However, they require substantial domain knowledge, including what constitutes a discrepancy and how to resolve it. We introduce LGDA, a learning algorithm for acquiring this knowledge, modeled as cases, that integrates case-based reasoning and reinforcement learning methods. We assess its utility on tasks from a complex video game environment. We claim that, for these tasks, LGDA can significantly outperform its ablations. Our evaluation provides evidence to support this claim. LGDA exemplifies a feasible design methodology for deployable GDA agents.
This issue summarizes the state of the art in structured knowledge transfer, which is an emerging approach to the general problem of knowledge acquisition and reuse. Its goal is to capture, in a general form, the internal structure of the objects, relations, strategies, and processes used to solve tasks drawn from a source domain, and exploit that knowledge to improve performance in a target domain.
We consider how to learn Hierarchical Task Networks (HTNs) for planning problems in which both the quality of solution plans generated by the HTNs and the speed at which those plans are found are important. We describe an integration of HTN Learning with Reinforcement Learning that both learns methods by analyzing semantic annotations on tasks and produces estimates of the expected values of the learned methods by performing Monte Carlo updates. We performed an experiment in which plan quality was inversely related to plan length. In two planning domains, we evaluated the planning performance of the learned methods in comparison to two state-of-the-art satisficing classical planners, FastForward and SGPlan6, and one optimal planner, HSP*. The results demonstrate that a greedy HTN planner using the learned methods was able to generate higher quality solutions than SGPlan6 in both domains and FastForward in one. Our planner, FastForward, and SGPlan6 ran in similar time, while HSP* was exponentially slower.
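The Monte Carlo value-update step described above can be sketched simply: after each episode, every learned HTN method used in the plan has its expected-value estimate moved toward the observed return via a running mean, and a greedy planner prefers the highest-valued applicable method. This is a hedged sketch of the general technique; the method names, returns, and update schedule are hypothetical, not taken from the paper.

```python
class MethodValueEstimator:
    """Running-mean Monte Carlo estimates of expected value per HTN method."""

    def __init__(self):
        self.value = {}   # method name -> current estimate of expected return
        self.count = {}   # method name -> number of episodes observed

    def monte_carlo_update(self, methods_used, episode_return):
        # after an episode, nudge each used method's estimate
        # toward the observed return (incremental sample mean)
        for m in methods_used:
            n = self.count.get(m, 0) + 1
            self.count[m] = n
            old = self.value.get(m, 0.0)
            self.value[m] = old + (episode_return - old) / n

    def best_method(self, candidates):
        # a greedy HTN planner would decompose a task using the
        # applicable method with the highest estimated value
        return max(candidates, key=lambda m: self.value.get(m, 0.0))
```

With returns defined to be inversely related to plan length, as in the experiment above, the greedy planner is steered toward methods that tend to yield shorter, higher-quality plans.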
The IJCAI-09 Workshop on Learning Structural Knowledge From Observations (STRUCK-09) took place as part of the International Joint Conference on Artificial Intelligence (IJCAI-09) on July 12 in Pasadena, California. The workshop program included paper presentations, discussion sessions about those papers, group discussions about two selected topics, and a joint discussion.