In this paper we present a new approach to plan understanding that explains observed actions in terms of domain knowledge. The process operates over hierarchical methods and uses an incremental form of data-driven abductive inference. We report experiments on problems from the Monroe corpus that demonstrate a basic ability to construct plausible explanations, graceful degradation of performance as the fraction of observed actions decreases, and incremental-processing results comparable to batch interpretation. We also discuss research on related tasks such as plan recognition and abductive construction of explanations.
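To make the core idea concrete, abductively explaining an observed action against a library of hierarchical methods can be sketched as below. The method library, the Monroe-flavored task names, and the `explanations` function are all hypothetical illustrations, not the system described in the abstract:

```python
# Hypothetical HTN-style method library (emergency-response flavor, as in
# the Monroe domain): each task maps to possible decompositions into subtasks.
METHODS = {
    "get-medical-help": [["call-hospital", "send-ambulance"]],
    "send-ambulance": [["dispatch-vehicle", "drive-to-scene"]],
}

def explanations(observed, methods):
    """Return every task whose recursive decomposition can contain the
    observed action -- each such task is a candidate explanation."""
    def reaches(task, action, seen=()):
        if task == action:
            return True
        if task in seen:  # guard against cyclic method libraries
            return False
        return any(reaches(sub, action, seen + (task,))
                   for decomposition in methods.get(task, [])
                   for sub in decomposition)
    return [t for t in methods if reaches(t, observed)]

# Observing "drive-to-scene" is explained by both tasks above.
print(explanations("drive-to-scene", METHODS))
```

A fuller system would rank the candidate explanations and update them incrementally as further actions are observed; this sketch only shows the abductive step from one observation to the tasks that could account for it.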
Abduction has been extensively studied in propositional logic because of its many applications in artificial intelligence. However, its intrinsic complexity has limited the implementation of abductive reasoning tools in more expressive logics. We have devised such a tool for ground flat equational logic, in which literals are equations or disequations between constants. Our tool is based on the computation of prime implicates. It uses a relaxed paramodulation calculus, designed to generate all prime implicates of a formula, together with a carefully designed data structure that stores the implicates and efficiently detects and removes redundancies. In addition to a detailed description of this method, we present an analysis of some experimental results.
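For intuition, here is the propositional analogue of the prime-implicate computation: resolution saturation with subsumption-based redundancy removal. This is a simplified stand-in for illustration only, not the paper's relaxed paramodulation calculus for equational logic (clauses are sets of integer literals, with negation as sign flip):

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all non-tautological resolvents of two clauses
    (clauses are frozensets of nonzero integer literals)."""
    out = []
    for lit in c1:
        if -lit in c2:
            r = (c1 - {lit}) | (c2 - {-lit})
            if not any(-l in r for l in r):  # skip tautologies
                out.append(frozenset(r))
    return out

def prime_implicates(clauses):
    """Saturate a clause set under resolution, keeping only
    subsumption-minimal clauses: these are the prime implicates."""
    clauses = {frozenset(c) for c in clauses}
    changed = True
    while changed:
        changed = False
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                # discard resolvents subsumed by a known clause
                if not any(c <= r for c in clauses | new):
                    new.add(r)
        if new:
            clauses |= new
            changed = True
        # remove clauses strictly subsumed by another clause
        clauses = {c for c in clauses
                   if not any(d < c for d in clauses)}
    return clauses
```

For example, `prime_implicates([[1, 2], [-1, 3]])` (i.e., (a or b) and (not a or c)) yields the three prime implicates a or b, not a or c, and b or c. The redundancy check here is a naive pairwise subsumption test; the data structure described in the abstract exists precisely to make that detection efficient at scale.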
The tendency to accept or reject arguments based on one's own beliefs or prior knowledge rather than on the reasoning process is called the belief-bias effect. A psychological syllogistic reasoning task demonstrates this phenomenon: participants are asked whether they accept or reject a given syllogism. We discuss one case that is commonly assumed to be believable but is not logically valid. By introducing abnormalities, abduction, and background knowledge, we model this case under the weak completion semantics. Our formalization raises new questions about observations and their explanations, which may need to include relevant prior abductive contextual information concerning side-effects. Inspection points, introduced by Pereira and Pinto, allow us to express these definitions syntactically and to intertwine them into an operational semantics.
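As an illustrative sketch only (an assumed minimal rendering, not the paper's formalization), the least model of a program under the weak completion semantics can be computed by iterating the three-valued Stenning–van Lambalgen consequence operator; the example program with an abnormality atom and its encoding are hypothetical:

```python
T, F, U = "T", "F", "U"  # three-valued (Lukasiewicz) truth values

def eval_lit(lit, I):
    """Evaluate a literal (atom, is_positive) in interpretation I;
    unassigned atoms are unknown."""
    value = I.get(lit[0], U)
    return value if lit[1] else {T: F, F: T, U: U}[value]

def eval_body(body, I):
    """Conjunction of literals; an empty body (a fact) is true."""
    values = [eval_lit(l, I) for l in body] or [T]
    if F in values:
        return F
    return U if U in values else T

def phi(program, I):
    """Consequence operator: a head is true if some body is true,
    false if all of its bodies are false."""
    J = {}
    for head, bodies in program.items():
        values = [eval_body(b, I) for b in bodies]
        if T in values:
            J[head] = T
        elif all(v == F for v in values):  # vacuously true for no bodies
            J[head] = F
    return J

def least_model(program):
    """Iterate phi from the empty interpretation up to a fixpoint."""
    I = {}
    while (J := phi(program, I)) != I:
        I = J
    return I

# Hypothetical program:  p <- q AND not ab,   ab <- bottom,   q <- top.
# An empty bodies-list encodes "head <- bottom", which makes head false
# under weak completion (head <-> bottom).
prog = {"p": [[("q", True), ("ab", False)]], "ab": [], "q": [[]]}
```

Here `least_model(prog)` maps `q` to true, the abnormality `ab` to false, and hence `p` to true; abduction would then propose additional facts (e.g., making `ab` true) to explain observations that contradict this model.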
We investigate the use of a simple, discriminative reranking approach to plan recognition in an abductive setting. In contrast to recent work, which attempts to model abductive plan recognition using various formalisms that integrate logic and graphical models (such as Markov Logic Networks or Bayesian Logic Programs), we instead advocate a simpler, more flexible approach in which plans found through an abductive beam-search are discriminatively scored based on arbitrary features. We show that this approach performs well even with relatively few positive training examples, and we obtain state-of-the-art results on two abductive plan recognition datasets, outperforming more complicated systems.
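The reranking step this abstract advocates can be sketched as a linear model over arbitrary plan features; the feature names, weights, and candidate encoding below are invented for illustration and are not the paper's actual feature set:

```python
def rerank(candidates, featurize, weights):
    """Score each candidate plan with a linear model over arbitrary
    features and return the candidates ordered best-first."""
    def score(plan):
        return sum(weights.get(name, 0.0) * value
                   for name, value in featurize(plan).items())
    return sorted(candidates, key=score, reverse=True)

# Hypothetical features: reward covering all observations, penalize
# each abduced assumption (a simple Occam-style bias).
weights = {"explains_all_obs": 2.0, "num_assumptions": -0.5}

def featurize(plan):
    return {"explains_all_obs": 1.0 if plan["covers"] else 0.0,
            "num_assumptions": float(plan["assumed"])}

# Candidates as produced by some abductive beam search (stubbed here).
cands = [{"id": "A", "covers": True, "assumed": 3},
         {"id": "B", "covers": False, "assumed": 1}]
best = rerank(cands, featurize, weights)[0]["id"]  # "A" (0.5 vs -0.5)
```

In a discriminative setup the weights would be learned from labeled plans rather than hand-set, but the scoring machinery is exactly this simple: any feature computable from a candidate plan can participate.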
We present a prototype game system that uses opportunistic storytelling, a particular commitment in the space of interactive narrative. It uses abductive event binding to deliver authored stories about the actions the player is already taking to achieve gameplay goals. We report a simulation experiment that characterizes the system's efficiency in delivering story event content as a function of parameterized player motivation to follow storylines and of the constraints placed on story events. The results show the value of two reasoning features of the system: incentive and look-ahead.