How does one deal with the unexpected? Our world is full of surprises, and we humans are often able to correctly identify a problem and respond appropriately. Consider a new driver encountering their first traffic circle; a student experiencing a hard drive failure in the middle of an assignment; an unexpected question asked during a job interview. In situations where we have a goal (e.g., reach a destination or submit a completed assignment), we may need to alter our original plan when the unexpected occurs. Could we enable autonomous, artificially intelligent agents to do the same?
A long-standing area of artificial intelligence is the field of automated planning. The traditional planning problem is to generate a sequence of actions given a concrete, specific goal (e.g., I will be home at dinnertime) and a set of specific actions (e.g., drive-car, fill-gas-tank, walk, etc.). Generating efficient, ideally optimal plans under different circumstances (e.g., delayed effects) is an active area of research. After a plan has been generated, and during its execution, the environment may change. For example, a robot retrieving packages in a warehouse may discover it has dropped its package. Or perhaps another robot has broken down due to a hardware failure and is blocking this robot's path. How can a robot (or any A.I. agent) know something unexpected has happened without knowing all possible future failures?
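To make the traditional planning problem concrete, here is a minimal sketch of forward state-space search over STRIPS-style actions. The domain, action names, and state facts are invented for illustration; real planners use heuristics rather than blind breadth-first search.

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first forward search: return a list of action names
    reaching a state that satisfies every goal fact, or None.
    Each action maps name -> (preconditions, add effects, delete effects)."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # all goal facts hold
            return steps
        for name, (pre, add, delete) in actions.items():
            if pre <= state:       # action is applicable
                new_state = frozenset((state - delete) | add)
                if new_state not in visited:
                    visited.add(new_state)
                    frontier.append((new_state, steps + [name]))
    return None

# Hypothetical "get home" domain, echoing the actions above.
actions = {
    "drive-to-station": ({"at-work"}, {"at-station"}, {"at-work"}),
    "fill-gas-tank": ({"at-station"}, {"has-gas"}, set()),
    "drive-home": ({"at-station", "has-gas"}, {"at-home"}, {"at-station"}),
}
goal = {"at-home"}
print(plan({"at-work"}, goal, actions))
# ['drive-to-station', 'fill-gas-tank', 'drive-home']
```

The search treats states as sets of facts; an action applies when its preconditions are a subset of the current state, exactly the classical STRIPS progression semantics.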
Fundamental research on autonomy aims to find general approaches to solve this problem. One approach is to generate expectations: facts that should be true during different stages of a plan's execution. When an expectation is violated, a discrepancy occurs between the expected and perceived facts. A new trend in autonomy is to include goal reasoning capabilities. In the event of a failure, the original goal may no longer be warranted. Perhaps robust autonomous agents need to generate and change their goals in response to a changing environment.
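The expectation mechanism described above can be sketched very simply: at each stage of execution, compare the facts the plan expects to hold against the facts the agent currently perceives. The fact names below are hypothetical, chosen to echo the warehouse-robot example.

```python
def detect_discrepancies(expectations, perceived):
    """Return every expected fact that is absent from perception.
    A non-empty result signals a discrepancy the agent should explain,
    possibly by revising its plan or even its goal."""
    return expectations - perceived

# Hypothetical warehouse-robot snapshot: the robot expected to still
# be holding its package, but perception says otherwise.
expected = {"holding(package)", "path-clear", "battery-ok"}
perceived = {"path-clear", "battery-ok"}
print(detect_discrepancies(expected, perceived))  # {'holding(package)'}
```

Detecting the discrepancy is only the first step; a goal-reasoning agent would then decide whether to replan for the original goal (retrieve the dropped package) or adopt a new one.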
Autonomous systems still have a long way to go, and many research questions remain open. Funding agencies consistently seek new research on autonomy for diverse operations ranging from cybersecurity to military and vehicular autonomy. What will autonomous systems be like in the future? Will we achieve autonomous agents that can handle any situation they encounter?
- Dustin Dannenhauer
This article presents new algorithms for inferring users’ activities in a class of flexible and open-ended educational software called exploratory learning environments (ELEs). Such settings provide a rich educational environment for students, but challenge teachers to keep track of students’ progress and to assess their performance. This article presents techniques for recognizing students’ activities in ELEs and visualizing these activities to students. It describes a new plan recognition algorithm that takes into account repetition and interleaving of activities. This algorithm was evaluated empirically using two ELEs for teaching chemistry and statistics used by thousands of students in several countries. It outperformed state-of-the-art plan recognition algorithms when compared to a gold standard obtained from a domain expert. We also show that visualizing students’ plans improves their performance on new problems when compared to an alternative visualization that consists of a step-by-step list of actions.
In exploratory domains, agents' actions map onto logs of behavior that include switching between activities, extraneous actions, and mistakes. These aspects create a challenging plan recognition problem. This paper presents a new algorithm for inferring students' activities in exploratory domains that is evaluated empirically using a new type of flexible and open-ended educational software for science education. Such software has been shown to provide a rich educational environment for students, but challenges teachers to keep track of students' progress and to assess their performance. The algorithm decomposes students’ complete interaction histories to create hierarchies of interdependent tasks that describe their activities using the software. It matches students' actions to a predefined grammar in a way that reflects that students solve problems in a modular fashion but may still interleave their activities. The algorithm was empirically evaluated on people’s interactions with two separate software systems: one simulating a chemistry laboratory and one for statistics education. It was separately compared to the state-of-the-art recognition algorithm for each software system. The results show that the algorithm correctly inferred students' activities significantly more often than the state of the art, and generalized to both software systems without modification.
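The core difficulty the abstract describes, that students keep each activity's internal order while interleaving actions across activities, can be illustrated with a toy subsequence matcher. This is not the article's algorithm (which builds full task hierarchies from a grammar); it is only a sketch of the interleaving constraint, with invented task and action names.

```python
def matches_interleaved(observed, tasks):
    """Check whether each task's actions appear in order within the
    observed log, while allowing actions of different tasks to
    interleave. Greedy matching; assumes no action is shared by
    two tasks."""
    positions = {name: 0 for name in tasks}  # next expected step per task
    for action in observed:
        for name, steps in tasks.items():
            i = positions[name]
            if i < len(steps) and steps[i] == action:
                positions[name] = i + 1
                break
    return all(positions[n] == len(tasks[n]) for n in tasks)

# Hypothetical log: a student interleaves two chemistry activities
# but keeps each activity's internal order intact.
tasks = {
    "titration": ["select-acid", "add-indicator", "record-ph"],
    "dilution": ["select-solvent", "measure-volume"],
}
log = ["select-acid", "select-solvent", "add-indicator",
       "measure-volume", "record-ph"]
print(matches_interleaved(log, tasks))  # True
```

A real recognizer must additionally handle extraneous actions, mistakes, repetition, and ambiguity between tasks, which is what makes the problem in these abstracts hard.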
The experiment involved setting a fire at a fixed location and specified time, and observing the behavior of the fireboss (the planner) and the bulldozers (the agents that put out the fire). Variability between trials is due to randomly changing wind speed and direction, nonuniform terrain and elevation, and the varying amounts of time agents take in executing primitive tasks. In this experiment we collected forty variables over the course of some 340 Phoenix trials, including measurements of the wind speed, the outcome (success or failure).