This paper highlights relationships among stochastic control theory, Lewis' notion of "imaging", and the representation of actions in AI systems. We show that the language of causal graphs offers a practical solution to the frame problem and its two satellites: the ramification and concurrency problems. Finally, we present a symbolic machinery that admits both probabilistic and causal information and produces probabilistic statements about the effect of actions and the impact of observations.
The objective is to assist a human/computer planner in analyzing plan tradeoffs and in assessing properties such as reliability, robustness, and ramifications under uncertain conditions. The core of this tool is a framework for reasoning about actions under uncertainty called action networks. Action networks are extensions of probabilistic causal networks (Bayes networks) that allow the modeling of actions, their preconditions, time, and persistence. They also allow the symbolic and numeric quantification of uncertainty in the cause-effect relations at different levels of abstraction. The work on action networks was motivated by the need to extend the capabilities of current frameworks for reasoning about plans to include notions and models of uncertainty.
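The key distinction action networks inherit from causal networks is between *observing* a variable and *acting* on it: an action fixes a variable's value and severs it from its normal causes, whereas an observation merely conditions the joint distribution. The following minimal sketch illustrates this on a toy rain/sprinkler network; the variable names and probabilities are invented for illustration, not taken from the paper.

```python
# Toy causal network: Rain -> Sprinkler, (Rain, Sprinkler) -> WetGrass.
# All numbers below are hypothetical.
P_RAIN = 0.3
P_SPRINKLER_GIVEN_RAIN = {True: 0.1, False: 0.5}     # P(sprinkler=on | rain)
P_WET_GIVEN = {(True, True): 0.99, (True, False): 0.9,
               (False, True): 0.9,  (False, False): 0.0}

def p_wet(do_sprinkler=None):
    """P(WetGrass), optionally under the action do(Sprinkler = do_sprinkler).

    The action replaces Sprinkler's conditional distribution with a point
    mass and cuts the Rain -> Sprinkler arc; Rain keeps its prior, unlike
    conditioning on an *observation* of the sprinkler's state.
    """
    total = 0.0
    for rain in (True, False):
        p_r = P_RAIN if rain else 1 - P_RAIN
        for spr in (True, False):
            if do_sprinkler is None:
                p_on = P_SPRINKLER_GIVEN_RAIN[rain]
                p_s = p_on if spr else 1 - p_on
            else:
                p_s = 1.0 if spr == do_sprinkler else 0.0
            total += p_r * p_s * P_WET_GIVEN[(rain, spr)]
    return total

print(p_wet())                     # no action: marginal of WetGrass
print(p_wet(do_sprinkler=True))    # effect of the action do(Sprinkler=on)
```

Enumerating the joint distribution suffices here because the network is tiny; in a real action network the same semantics would be implemented by standard Bayes-net inference over the mutilated graph.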
What gives us the audacity to expect that actions should have neat and compact representations? Why did the authors of STRIPS [Fikes & Nilsson, 1971] and BURIDAN [Kushmerick et al., 1993] believe they could get away with such short specifications of actions? Whether we take the probabilistic paradigm that actions are transformations from probability distributions to probability distributions, or the deterministic paradigm that actions are transformations from states to states, such transformations could in principle be infinitely complex. Yet, in practice, people teach each other rather quickly what actions normally do to the world, people predict the consequences of any given action without much hassle, and AI researchers write languages for actions as if it were a God-given truth that action representations should be compact, elegant, and meaningful.
The work we report in this paper is directed towards creating a tool, called Plan Simulator and Analyzer (PSA), for supporting the functionality portrayed in Figure 1. Here, a human planner is confronted with a situation, an objective, and a set of courses of action or plans which are believed to achieve the objective. The human planner is looking for assistance in simulating the behavior of each of these plans. This simulation entails testing the plans against different conditions in the domain (where the plans will be executed) in order to establish their degree of success, compute the costs of different portions of each plan, and identify dependencies upon the relevant parameters. The final objective is to make choices regarding which plan to adopt according to pre-established criteria.
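The simulation loop described above can be sketched as a simple Monte Carlo comparison: sample uncertain domain conditions, evaluate each candidate plan's success and cost under each sample, and aggregate. The plans, conditions, and numbers below are all hypothetical placeholders, not part of PSA itself.

```python
import random

def simulate(plan, n_trials=10_000, seed=0):
    """Estimate a plan's success rate and average cost by sampling
    uncertain domain conditions (here, a single 'bad weather' variable)."""
    rng = random.Random(seed)
    successes, total_cost = 0, 0.0
    for _ in range(n_trials):
        bad_weather = rng.random() < 0.2          # uncertain condition
        p_success, cost = plan(bad_weather)
        if rng.random() < p_success:
            successes += 1
        total_cost += cost
    return successes / n_trials, total_cost / n_trials

# Two hypothetical plans with different robustness/cost tradeoffs.
def plan_a(bad_weather):
    # Cheap, but sensitive to conditions.
    return (0.5 if bad_weather else 0.95), 10.0

def plan_b(bad_weather):
    # Costlier, but robust across conditions.
    return 0.9, 25.0

for name, plan in [("A", plan_a), ("B", plan_b)]:
    rate, cost = simulate(plan)
    print(f"plan {name}: success rate ~{rate:.2f}, avg cost ~{cost:.1f}")
```

The choice between the plans then reduces to the planner's pre-established criteria, e.g. a threshold on success probability or an expected-utility comparison of success rate against cost.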
The dramatic success in machine learning has led to an explosion of artificial intelligence (AI) applications and increasing expectations for autonomous systems that exhibit human-level intelligence. These expectations have, however, met with fundamental obstacles that cut across many application areas. One such obstacle is adaptability, or robustness. Machine learning researchers have noted that current systems lack the ability to recognize or react to new circumstances they have not been specifically programmed or trained for. Intensive theoretical and experimental efforts toward "transfer learning," "domain adaptation," and "lifelong learning" [4] are reflective of this obstacle. Another obstacle is "explainability," or that "machine learning models remain mostly black boxes" [26], unable to explain the reasons behind their predictions or recommendations, thus eroding users' trust and impeding diagnosis and repair; see Hutson [8] and Marcus [11]. A third obstacle concerns the lack of understanding of cause-effect connections.